Re: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Lukasz Stafiniak
On 10/12/07, Eliezer S. Yudkowsky <[EMAIL PROTECTED]> wrote:
> some of us are much impressed by it.  Anyone with even a surface grasp
> of the basic concept on a math level will realize that there's no
> difference between self-modifying and writing an outside copy of
> yourself, but *either one* involves the sort of issues I've been
> calling "reflective".
>
Well, this could be at least a definition of "self-modifying" ;-)



Re: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Eliezer S. Yudkowsky

Tim Freeman wrote:


My point is that if one is worried about a self-improving Seed AI
exploding, one should also be worried about any AI that competently
writes software exploding.


There *is* a slight gap between competently writing software and 
competently writing minds.  Large by human standards, not much by 
interspecies standards.  It does involve new math issues, which is why 
some of us are much impressed by it.  Anyone with even a surface grasp 
of the basic concept on a math level will realize that there's no 
difference between self-modifying and writing an outside copy of 
yourself, but *either one* involves the sort of issues I've been 
calling "reflective".


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



RE: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Derek Zahn
Linas Vepstas:
> > >Let's take Novamente as an example.  ... It cannot improve itself
> > >until the following things happen:
> > >
> > >1) It acquires the knowledge and skills to become a competent
> > >   programmer, a task that takes a human many years of directed
> > >   training and practical experience.
>
> Wrong. This was hashed to death in previous emails; and then again
> probably several more times before I joined the list.
>
> Anyone care to assemble a position paper on "self improvement"
> that reviews the situation? I'm slightly irritated by the
> recurring speculation and misunderstanding.
Ok, the conversation was about how Novamente could recursively improve 
itself into a runaway hard-takeoff scenario.
 
You're claiming that it can do so without the knowledge or skills of a 
competent programmer, with the very convincing argument "Wrong".  Care to 
elaborate at all?  Or is your only purpose to communicate your slight 
irritation?
 
 


Re: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Tim Freeman
From: Derek Zahn <[EMAIL PROTECTED]>
>You seem to think that self-reference buys you nothing at all since it
>is a simple matter for the first AGI projects to reinvent their own
>equivalent from scratch, but I'm not sure that's true.

The "from scratch" part is a straw-man argument.  The AGI project will
have lots of resources to draw on; it could read Hutter's papers or
license HTM or incorporate lots of other packages or existing AI projects
that it might find useful.

My point is that if one is worried about a self-improving Seed AI
exploding, one should also be worried about any AI that competently
writes software exploding.  Keeping its source code secret from
itself doesn't help much.  Hmm, I suppose an AI that does mechanical
engineering could explode too, perhaps by doing nanotech, so AIs
competently doing engineering is a risk in general.

-- 
Tim Freeman   http://www.fungible.com   [EMAIL PROTECTED]



Re: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Linas Vepstas
> >Let's take Novamente as an example.  ... It cannot improve itself
> >until the following things happen:
> >
> >1) It acquires the knowledge and skills to become a competent
> >   programmer, a task that takes a human many years of directed
> >   training and practical experience.

Wrong. This was hashed to death in previous emails; and then again 
probably several more times before I joined the list. 

Anyone care to assemble a position paper on "self improvement"
that reviews the situation?  I'm slightly irritated by the 
recurring speculation and misunderstanding.

--linas



Re: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Vladimir Nesov
Derek, Tim,

There is no oversight here: self-improvement doesn't necessarily refer
to the actual instance of the self that is to be improved, but to the
AGI's design. For runaway progress to happen, the next thing must be
better than the previous one, and one way of arranging that is for the
next thing to be a refinement of the previous thing. Self-improvement
'in place' may, depending on the nature of the improvement, be
preferable if it provides a way to efficiently transfer acquired
knowledge from the previous version to the next one (possibly even
without any modification).

On 10/12/07, Derek Zahn <[EMAIL PROTECTED]> wrote:
>
> Tim Freeman:
>
> > No value is
> > added by introducing considerations about self-reference into
> > conversations about the consequences of AI engineering.
> >
> > Junior geeks do find it impressive, though.
>
>  The point of that conversation was to illustrate that if people are worried
> about Seed AI exploding, then one option is to not build Seed AI (since that
> is only one approach to developing AGI, and in fact I do not know of any
> actual project that includes it at present).  Quoting Yudkowsky:
>
>  > The task is not to build an AI with some astronomical level
>  > of intelligence; the task is building an AI which is capable
>  > of improving itself, of understanding and rewriting its own
>  > source code.
>
>  Perhaps only "junior geeks" like him find the concept relevant.  You seem
> to think that self-reference buys you nothing at all since it is a simple
> matter for the first AGI projects to reinvent their own equivalent from
> scratch, but I'm not sure that's true.
>


-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



Re: [META] Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-12 Thread Linas Vepstas
On Wed, Oct 10, 2007 at 01:22:26PM -0400, Richard Loosemore wrote:
> 
> Am I the only one, or does anyone else agree that politics/political 
> theorising is not appropriate on the AGI list?

Yes, and I'm sorry I triggered the thread. 

> I particularly object to libertarianism being shoved down our throats, 
> not so much because I disagree with it, but because so much of the 
> singularity / extropian / futurist discussion universe is dominated by it.

Why is that?  Before this, the last libertarian I ran across was 
a few decades ago. And yet, here, they are legion. Why is that?
Does libertarian philosophy make people more open-minded to ideas
such as the singularity? Make them bigger dreamers? Make them more
willing to explore alternatives, even as the rest of the world 
explores the latest Hollywood movie?

--linas



RE: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Derek Zahn
Tim Freeman:
> No value is
> added by introducing considerations about self-reference into
> conversations about the consequences of AI engineering.
>
> Junior geeks do find it impressive, though.
The point of that conversation was to illustrate that if people are worried 
about Seed AI exploding, then one option is to not build Seed AI (since that is 
only one approach to developing AGI, and in fact I do not know of any actual 
project that includes it at present).  Quoting Yudkowsky:
 
> The task is not to build an AI with some astronomical level 
> of intelligence; the task is building an AI which is capable 
> of improving itself, of understanding and rewriting its own 
> source code.
 
Perhaps only "junior geeks" like him find the concept relevant.  You seem to 
think that self-reference buys you nothing at all since it is a simple matter 
for the first AGI projects to reinvent their own equivalent from scratch, but 
I'm not sure that's true.
 


RE: Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Derek Zahn
Tim Freeman writes:
> >Let's take Novamente as an example.  ... It cannot improve itself
> >until the following things happen:
> >
> >1) It acquires the knowledge and skills to become a competent
> >   programmer, a task that takes a human many years of directed
> >   training and practical experience.
> >
> >2) It is given access to its own implementation and permission to alter it.
> >
> >3) It understands its own implementation well enough to make a helpful change.
> >...
>
> I agree that resource #1, competent programming, is essential for any
> interesting takeoff scenario.  I don't think the other two matter,
> though.
Ok, this alternative scenario -- where Novamente secretly reinvents the 
theoretical foundations needed for AGI development, designs its successor from 
those first principles, and somehow hijacks an equivalent or superior 
supercomputer to receive the de novo design and surreptitiously trains it to 
superhuman capacity -- should also be protected against.  It's a fairly 
ridiculous scenario, but for completeness should be mentioned.
 


Self-improvement is not a special case (was Re: [agi] Religion-free technical content)

2007-10-12 Thread Tim Freeman
>From: Derek Zahn <[EMAIL PROTECTED]>
>Date: Sun, 30 Sep 2007 08:57:53 -0600
>...
>One thing that could improve safety is to reject the notion that AGI
>projects should be focused on, or even capable of, recursive self
>improvement in the sense of reprogramming its core implementation.
>...
>Let's take Novamente as an example.  ... It cannot improve itself
>until the following things happen:
>
>1) It acquires the knowledge and skills to become a competent
>   programmer, a task that takes a human many years of directed
>   training and practical experience.
> 
>2) It is given access to its own implementation and permission to alter it.
> 
>3) It understands its own implementation well enough to make a helpful change.
>...

I agree that resource #1, competent programming, is essential for any
interesting takeoff scenario.  I don't think the other two matter,
though.

Consider these two scenarios:

Self-improvement: The AI builds a new, improved version of itself,
   turns on the new one, and turns off the old one.

Engineering: The AI builds a better AI and turns it on.

These are essentially the same scenario, except in the latter case the
AI doesn't turn itself off.  If we assume that the new, improved AI
really is improved, we don't care much whether the old one turns
itself off because the new one is now the dominant player and the old
one therefore doesn't matter much.
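
To make the equivalence concrete, here is a toy Python sketch (the Agent
class and its design_successor() method are invented stand-ins, not any
real AI architecture): both paths just produce a better program and hand
control to it, and the only difference is whether the old process keeps
running.

# Toy illustration of the two scenarios above; purely hypothetical.
class Agent:
    def __init__(self, capability):
        self.capability = capability

    def design_successor(self):
        # Stand-in for "competently writing software/minds":
        # emit a strictly better program.
        return Agent(self.capability + 1)

def self_improvement(agent):
    new = agent.design_successor()
    return new                        # old instance is switched off

def engineering(agent):
    new = agent.design_successor()
    return agent, new                 # old instance keeps running

a0 = Agent(capability=1)
print(self_improvement(a0).capability)      # 2
old, new = engineering(a0)
print(old.capability, new.capability)       # 1 2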

So giving an AI access to its own source code and permission to alter
it (resource #2 above) is irrelevant.  Giving it understanding of its
own source code is irrelevant too; it's just as good for it to
understand some other actual or potential AI implementation, and
that's subsumed by the competent programming requirement.  No value is
added by introducing considerations about self-reference into
conversations about the consequences of AI engineering.

Junior geeks do find it impressive, though.

-- 
Tim Freeman   http://www.fungible.com   [EMAIL PROTECTED]



Re: [META] Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-11 Thread a

Yes, I think that too.

On the practical side, I think that investing in AGI requires 
significant tax cuts, and we should elect a candidate that would do that 
(Ron Paul). I think that the government has to have more respect for 
potential weapons (like AGI), so we should elect a candidate who is 
strongly pro-gun (Ron Paul). I think that the government has to trust 
and respect the privacy of its people, so you would not be forced to 
sell your AGI to the military. No more wiretapping (abolish the Patriot 
Act), so the government won't hear of an AGI being successfully developed. 
Abolish the Federal Reserve, so there is no more malinvestment and more 
productive investment (including AGI investment). Ron Paul will do all 
of that.


JW Johnston wrote:

I also agree except ... I think political and economic theories can inform AGI design, 
particularly in areas of AGI decision making and friendliness/roboethics. I wasn't 
familiar with the theory of Comparative Advantage until Josh and Eric brought it up. 
(Josh discusses in conjunction with friendly AIs in his "The Age of Virtuous 
Machines" at Kurzweil's site.) I like to see discussions in these contexts.

-JW

-Original Message-
  

From: Bob Mottram <[EMAIL PROTECTED]>
Sent: Oct 11, 2007 11:12 AM
To: agi@v2.listbox.com
Subject: Re: [META] Re: Economic libertarianism [was Re: The first-to-market 
effect [WAS Re: [agi] Religion-free technical content]

On 10/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:


Am I the only one, or does anyone else agree that politics/political
theorising is not appropriate on the AGI list?
  

Agreed.  There are many other forums where political ideology can be debated.






Re: [META] Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-11 Thread JW Johnston
I also agree except ... I think political and economic theories can inform AGI 
design, particularly in areas of AGI decision making and 
friendliness/roboethics. I wasn't familiar with the theory of Comparative 
Advantage until Josh and Eric brought it up. (Josh discusses in conjunction 
with friendly AIs in his "The Age of Virtuous Machines" at Kurzweil's site.) I 
like to see discussions in these contexts.

-JW

-Original Message-
>From: Bob Mottram <[EMAIL PROTECTED]>
>Sent: Oct 11, 2007 11:12 AM
>To: agi@v2.listbox.com
>Subject: Re: [META] Re: Economic libertarianism [was Re: The first-to-market 
>effect [WAS Re: [agi] Religion-free technical content]
>
>On 10/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>> Am I the only one, or does anyone else agree that politics/political
>> theorising is not appropriate on the AGI list?
>
>Agreed.  There are many other forums where political ideology can be debated.
>



Re: [META] Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-11 Thread Bob Mottram
On 10/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Am I the only one, or does anyone else agree that politics/political
> theorising is not appropriate on the AGI list?

Agreed.  There are many other forums where political ideology can be debated.



[META] Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-10 Thread Richard Loosemore



Am I the only one, or does anyone else agree that politics/political 
theorising is not appropriate on the AGI list?


I particularly object to libertarianism being shoved down our throats, 
not so much because I disagree with it, but because so much of the 
singularity / extropian / futurist discussion universe is dominated by it.



Richard Loosemore




J. Andrew Rogers wrote:


On Oct 10, 2007, at 2:26 AM, Robert Wensman wrote:

Yes, of course, the Really Big Fish that is democracy.



No, you got this quite wrong.  The Really Big Fish is the institution 
responsible for governance (usually the "government"); "democracy" is 
merely a fuzzy category of rule set used in governance.



I am starting to get quite puzzled by all Americans (I don't know if 
you are American though, but I want to express this anyway) who 
express severe distrust in government. Because if you distrust all 
forms of government, what you really distrust is democracy itself.



This bias is for good reason; there are well described pathological 
minima that are essentially unavoidable in a democracy.  The American 
government was explicitly designed as a constitutional republic (not a 
democracy) to avoid these pathologies.  In the 20th century the American 
constitution was changed to make it more like a democracy, and the 
expected pathologies have materialized.


If you do not understand this, then the rest of your reasoning is likely 
misplaced.  Much of American libertarian political thought is based on a 
desire to go back to a strict constitutional republic rather than the 
current quasi-democracy, in large part to fix the very real problems 
that quasi-democracy created.  Many of the "bad" things the Federal 
government is currently accused of were enabled by democracy and would 
have been impractical or illegal under a strict constitutional republic.




Here you basically compare democracy to...  whom? The devil!?



Perhaps I should refrain from using literary metaphors in the future, 
since you apparently did not understand it.



My recommendation is to put some faith in the will of the people! When 
you walk on the street and look around you, those are your fellow 
citizens you should feel at least some kind of trust in. They are not 
out to get you!



I'm sure they are all lovely people for the most part, but their poorly 
reasoned good intentions will destroy us all.  The problem is not that 
people are evil, the problem is that humans at large are hopelessly 
ignorant, short-sighted, and irrational even when trying to do good and 
without regard for clearly derivable consequences.



Actually, I believe that the relative stupidity of the population 
could act as a kind of protection against manipulation.



Non sequitur.


Also, history shows that intelligence is no guarantee of power. 
The Russian revolution and the genocide in Cambodia illustrate 
effectively how intelligent people were slaughtered by apparently less 
intelligent people, and later how they were controlled to the extreme 
for decades.



You are improperly conflating intelligence and rationality.


Cheers,

J. Andrew Rogers









Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-10 Thread J. Andrew Rogers


On Oct 10, 2007, at 2:26 AM, Robert Wensman wrote:

Yes, of course, the Really Big Fish that is democracy.



No, you got this quite wrong.  The Really Big Fish is the institution  
responsible for governance (usually the "government"); "democracy" is  
merely a fuzzy category of rule set used in governance.



I am starting to get quite puzzled by all Americans (I don't know  
if you are American though, but I want to express this anyway) who  
express severe distrust in government. Because if you distrust all  
forms of government, what you really distrust is democracy itself.



This bias is for good reason; there are well described pathological  
minima that are essentially unavoidable in a democracy.  The American  
government was explicitly designed as a constitutional republic (not  
a democracy) to avoid these pathologies.  In the 20th century the  
American constitution was changed to make it more like a democracy,  
and the expected pathologies have materialized.


If you do not understand this, then the rest of your reasoning is  
likely misplaced.  Much of American libertarian political thought is  
based on a desire to go back to a strict constitutional republic  
rather than the current quasi-democracy, in large part to fix the  
very real problems that quasi-democracy created.  Many of the "bad"  
things the Federal government is currently accused of were enabled by  
democracy and would have been impractical or illegal under a strict  
constitutional republic.




Here you basically compare democracy to...  whom? The devil!?



Perhaps I should refrain from using literary metaphors in the future,  
since you apparently did not understand it.



My recommendation is to put some faith in the will of the people!  
When you walk on the street and look around you, those are your  
fellow citizens you should feel at least some kind of trust in. They  
are not out to get you!



I'm sure they are all lovely people for the most part, but their  
poorly reasoned good intentions will destroy us all.  The problem is  
not that people are evil, the problem is that humans at large are  
hopelessly ignorant, short-sighted, and irrational even when trying  
to do good and without regard for clearly derivable consequences.



Actually, I believe that the relative stupidity of the population  
could act as a kind of protection against manipulation.



Non sequitur.


Also, history shows that intelligence is no guarantee of  
power. The Russian revolution and the genocide in Cambodia  
illustrate effectively how intelligent people were slaughtered by  
apparently less intelligent people, and later how they were  
controlled to the extreme for decades.



You are improperly conflating intelligence and rationality.


Cheers,

J. Andrew Rogers





Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-10 Thread Eric Baum

BillK> On 10/6/07, a wrote:
>> I am skeptical that economies follow the self-organized criticality
>> behavior.  There aren't any examples. Some would cite the Great
>> Depression, but it was caused by the malinvestment created by
>> Central Banks. e.g. The Federal Reserve System. See the Austrian
>> Business Cycle Theory for details.  In conclusion, economics is a
>> bad analogy with complex systems.
>> 

BillK> My objection to economic libertarianism is that it's not a free
BillK> market. A 'free' market is an impossibility. There will always
BillK> be somebody who is bigger than me or cleverer than me or better
BillK> educated than me, etc. A regulatory environment attempts to
BillK> reduce the victimisation of the weaker members of the
BillK> population and introduces another set of biases to the economy.

This is the same misunderstanding that justifies protectionism among
nations. When nation A (say the US) trades with nation B (say Haiti),
nation A may be able to make every single thing much better and
cheaper than nation B, but it still pays both nation B and nation A to
trade freely, because nation B has a comparative advantage in
something: a comparative advantage being whatever they make
least badly, which they can swap with nation A so that both nations benefit.
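
To see the arithmetic, here is a small Python sketch with made-up output
figures (the rates and the even autarky split are illustrative
assumptions, not data):

# Made-up numbers: output per worker-year.  Nation A is better at both
# goods, yet both nations gain when B specializes in what it makes
# least badly (food) and A concentrates on cloth.
a_cloth_rate, a_food_rate = 10, 10     # nation A
b_cloth_rate, b_food_rate = 1, 2       # nation B

# Autarky: each nation splits its single worker-year evenly.
autarky_cloth = a_cloth_rate / 2 + b_cloth_rate / 2    # 5.5
autarky_food  = a_food_rate / 2 + b_food_rate / 2      # 6.0

# Trade: B specializes fully in food; A covers the remaining food the
# world consumed under autarky and spends the rest of its time on cloth.
time_a_on_food = (autarky_food - b_food_rate) / a_food_rate    # 0.4
trade_cloth = (1 - time_a_on_food) * a_cloth_rate              # 6.0
trade_food  = b_food_rate + time_a_on_food * a_food_rate       # 6.0

print(trade_cloth - autarky_cloth)   # 0.5 extra cloth, same labor, same food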

Likewise, Michael Jordan may be much better able to mow his lawn than
whoever he pays to do it, but it still benefits both of them when he
pays the lawn guy and concentrates on basketball.

You benefit greatly by trading with people who are cleverer, better
educated, richer, stronger than you.
The more clever they are than you, the more they have to offer you,
and the more they will pay you for what you have to offer them.

Regulations that restrict your ability to enter into trades with
these people hurt you. They do introduce biases into the economy,
biases that make everybody worse off, particularly the weaker members
of society, except for some special interests that lobby for the
regulations and extract rent from society.

BillK> A free market is just a nice intellectual theory that is of no
BillK> use in the real world.  (Unless you are in the Mafia, of
BillK> course).

BillK> BillK




Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-10 Thread Robert Wensman
>
>
>
> The only solution to this problem I ever see suggested is to
> intentionally create a Really Big Fish called the government that can
> effortlessly eat every fish in the pond but promises not to -- to
> prevent the creation of Really Big Fish.  That is quite the Faustian
> bargain to protect yourself from the lesser demons.


Yes, of course, the Really Big Fish that is democracy. I am starting to get
quite puzzled by all Americans (I don't know if you are American though, but
I want to express this anyway) who express severe distrust in government.
Because if you distrust all forms of government, what you really distrust is
democracy itself. Here you basically compare democracy to...  whom? The
devil!? The USA is supposed to be the "leading democracy of the world"; yeah,
right. But I never hear any people speak so badly of their government, and
in effect of democracy itself. The American idea of liberalism is certainly
not the same thing as democracy. Maybe this is the century when Americans
will find that out.

The American liberal culture was founded when the plains of America appeared
endless, and if you did not like the influential people of a certain area,
you just moved on to virgin grounds and started your own community with your
own rules. But there is no more virgin land in America, and people have
long since accumulated in the cities. Liberty does not work quite so well
when people live close together and need to depend on each other. That
lesson was learned in Europe ages ago. My recommendation is to put some
faith in the will of the people! When you walk on the street and look around
you, those are your fellow citizens you should feel at least some kind of
trust in. They are not out to get you!

Then of course the American form of "democracy" is not so excellent, so
maybe there is a reason for the distrust, sad as that is. On the surface the
USA has only two parties, which is just one more than China. Sweden is not
much better, but at least we have seven live and active parties. But these
are problems that can be solved and are not a reason to give up on democracy.


Generally though, the point that you fail to see is that an AGI can
> just as easily subvert *any* power structure, whether the environment
> is a libertarian free market or an autocratic communist state.  The
> problem has nothing to do with the governance of the economy but the
> fact that the AGI is the single most intelligent actor in the economy
> however you may arrange it.  You can rearrange and change the rules
> as you wish, but any economy where transactions are something other
> than completely random is an economy that can be completely dominated
> by AGI in short order.  The game is exactly the same either way, and
> more rigid economies have much simpler patterns that make them easier
> to manipulate.
>
> Regulating economies to prevent super-intelligent actors from doing
> bad things is rearranging the deck chairs on the Titanic.


I agree that a super intelligent life form could be quite a difficult
adversary. It might be able to manipulate and take over a democratic power
structure also, I would not deny that. Probably it would try to target the
culture of the people, and insert hostile but stealthy memes into the
population. I guess it would also try to gain the trust of people and make
them dependent on it by offering appealing services. Depending on the
economy and regulations, it could also try to obtain direct control over as
much automated production capacity as possible, especially production
capacity that could be used for building weapons.

It is not true, as you say, that the economy of a democratic socialist
society has simple patterns that are easy to manipulate. The supreme power in
such a society lies in the democracy, and to manipulate that power you need
to manipulate the whole population. Actually, I believe that the
relative stupidity of the population could act as a kind of protection
against manipulation. I have a son who is one month old, and I would say it
is really difficult to control someone who is as extremely dumb as kids of
that age are.

However, I would not go so far as to say that intelligence implies power, i.e.
that a super intelligent life form would by necessity be able to take over
any given power structure. I remember having this discussion with a friend a
long time ago. The trivial example is if you have a super intelligent AGI
brain in a box in front of you on your desk, and you have a gun. Then you
can take the gun and shoot the box. That proves at least that there is no
implication in the strict logical sense.

But of course the picture gets more complicated if we have an AGI system
that interacts in a social context, where we put different degrees of trust
in it. Apparently the danger increases the more dependent we are on the AGI
systems. But there are methods to protect ourselves. One way is to never
utilize the most intelligent AGI systems directly: For example we could us

Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-09 Thread Eliezer S. Yudkowsky

J. Andrew Rogers wrote:


Generally though, the point that you fail to see is that an AGI can  
just as easily subvert *any* power structure, whether the environment  
is a libertarian free market or an autocratic communist state.  The  
problem has nothing to do with the governance of the economy but the  
fact that the AGI is the single most intelligent actor in the economy  
however you may arrange it.  You can rearrange and change the rules  as 
you wish, but any economy where transactions are something other  than 
completely random is an economy that can be completely dominated  by AGI 
in short order.  The game is exactly the same either way, and  more 
rigid economies have much simpler patterns that make them easier  to 
manipulate.


Regulating economies to prevent super-intelligent actors from doing  bad 
things is rearranging the deck chairs on the Titanic.


Succinctly put.

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



RE: [agi] Religion-free technical content & breaking the small hardware mindset

2007-10-09 Thread Edward W. Porter
I think IQ tests are an important measure, but they don't measure
everything important.  FDR was not nearly as bright as Richard Nixon, but
he was probably a much better president.

Ed Porter

-Original Message-
From: a [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 09, 2007 4:19 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content & breaking the small
hardware mindset


With googling, I found that older people have lower IQs:
http://www.sciencedaily.com/releases/2006/05/060504082306.htm
IMO, the brain is like a muscle, not an organ. IQ is said to be highly
genetic, and the heritability increases with age. Perhaps older
people do not get as much mental stimulation as young people?

IMO, IQ does not measure general intelligence, and certainly does not
measure common-sense intelligence. The Bushmen and Pygmy peoples have an
average IQ of 54. (source: http://www.rlynn.co.uk/) These IQs are much
lower than those of some mentally retarded people and people with Down
syndrome, yet the Bushmen and Pygmy peoples behave quite normally.

Yes, IQ is a sensitive and controversial topic, particularly the racial
differences in IQ.

"my ability to recall things is much worse than it was twenty years ago"
Commonly used culture-free IQ tests, such as Raven's Progressive Matrices,
generally measure visual-spatial intelligence. They do not measure
crystallized intelligence such as memory recall, but visual-spatial fluid
intelligence.

I do not take IQ tests too seriously. IQ only measures visual-spatial
reasoning, not auditory or linguistic intelligence. Some mentally
retarded autistic people have extremely high IQs.

Edward W. Porter wrote:
>
> Dear indefinite article,
>
> The Wikipedia entry for "Flynn Effect" suggests -- in agreement with
> your comment in the below post -- that older people (at least those in
> the pre-dementia years) don't get dumber with age relative to their
> younger selves, but rather relative to the increasing intelligence of
> people younger than themselves (and, thus, relative to re-normed IQ
> tests).
>
> Perhaps that is correct, but I can tell you that based on my own
> experience, my ability to recall things is much worse than it was
> twenty years ago. Furthermore, my ability to spend most of three or
> four nights in a row lying bed in most of the night with my head
> buzzing with concepts about an intellectual problem of interest
> without feeling like a total zombiod in the following days has
> substantially declined.
>
> Since most organs of the body diminish in function with age, it would
> be surprising if the brain didn't also.
>
> We live in the age of political correctness where it can be dangerous
> to one’s career to say anything unfavorable about any large group of
> people, particularly one as powerful as the over-45s, who, to a large
> extent, rule the world. (Or even to those in the AARP, which is an
> extremely powerful lobby.) So I don't know how seriously I would take
> the statements that age doesn't affect IQ.
>
> My mother, who had the second highest IQ in her college class, was a
> great one for relaying choice tidbits. She once said that Christiaan
> Barnard, the first doctor to successfully perform a heart transplant,
> once said something to the effect of
>
> “If you think old people look bad from the outside, you
> should see how bad they look from the inside.”
>
> That would presumably also apply to our brains.
>






Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-09 Thread J. Andrew Rogers


On Oct 9, 2007, at 4:27 AM, Robert Wensman wrote:
This is of course just an illustration and by no means a proof that  
the same thing would occur in a laissez-faire/libertarianism  
economy. Libertarians commonly put blame for monopolies on  
government involvement, and I guess some would object that I  
unfairly compare fish that eat each other with a non-violent  
economy. But let's just say I do not share their relaxed attitude  
towards the potential threat of monopoly, and a bigger fish eating  
a smaller fish does have some similarity to a bigger company  
acquiring a smaller one.



The only solution to this problem I ever see suggested is to  
intentionally create a Really Big Fish called the government that can  
effortlessly eat every fish in the pond but promises not to -- to  
prevent the creation of Really Big Fish.  That is quite the Faustian  
bargain to protect yourself from the lesser demons.



Generally though, the point that you fail to see is that an AGI can  
just as easily subvert *any* power structure, whether the environment  
is a libertarian free market or an autocratic communist state.  The  
problem has nothing to do with the governance of the economy but the  
fact that the AGI is the single most intelligent actor in the economy  
however you may arrange it.  You can rearrange and change the rules  
as you wish, but any economy where transactions are something other  
than completely random is an economy that can be completely dominated  
by AGI in short order.  The game is exactly the same either way, and  
more rigid economies have much simpler patterns that make them easier  
to manipulate.


Regulating economies to prevent super-intelligent actors from doing  
bad things is rearranging the deck chairs on the Titanic.


Cheers,

J. Andrew Rogers







Re: [agi] Religion-free technical content & breaking the small hardware mindset

2007-10-09 Thread a
With googling, I found that older people have lower IQs: 
http://www.sciencedaily.com/releases/2006/05/060504082306.htm
IMO, the brain is like a muscle, not an organ. IQ is said to be highly 
genetic, and the heritability increases with age. Perhaps older 
people do not get as much mental stimulation as young people?


IMO, IQ does not measure general intelligence, and certainly does not 
measure common-sense intelligence. The Bushmen and Pygmy peoples have an 
average IQ of 54. (source: http://www.rlynn.co.uk/) These IQs are much 
lower than those of some mentally retarded people and people with Down 
syndrome, yet the Bushmen and Pygmy peoples behave quite normally.


Yes, IQ is a sensitive and controversial topic, particularly the racial 
differences in IQ.


"my ability to recall things is much worse than it was twenty years ago" 
Commonly used culture-free IQ tests, such as Raven's Progressive Matrices, 
generally measure visual-spatial intelligence. They do not measure 
crystallized intelligence such as memory recall, but visual-spatial fluid 
intelligence.


I do not take IQ tests too seriously. IQ only measures visual-spatial 
reasoning, not auditory or linguistic intelligence. Some mentally 
retarded autistic people have extremely high IQs.


Edward W. Porter wrote:


Dear indefinite article,

The Wikipedia entry for "Flynn Effect" suggests -- in agreement with 
your comment in the below post -- that older people (at least those in 
the pre-dementia years) don't get dumber with age relative to their 
younger selves, but rather relative to the increasing intelligence of 
people younger than themselves (and, thus, relative to re-normed IQ 
tests).


Perhaps that is correct, but I can tell you that based on my own 
experience, my ability to recall things is much worse than it was 
twenty years ago. Furthermore, my ability to spend three or 
four nights in a row lying in bed most of the night with my head 
buzzing with concepts about an intellectual problem of interest 
without feeling like a total zomboid in the following days has 
substantially declined.


Since most organs of the body diminish in function with age, it would 
be surprising if the brain didn't also.


We live in the age of political correctness where it can be dangerous 
to one’s career to say anything unfavorable about any large group of 
people, particularly one as powerful as the over-45s, who, to a large 
extent, rule the world. (Or even to those in the AARP, which is an 
extremely powerful lobby.) So I don't know how seriously I would take 
the statements that age doesn't affect IQ.


My mother, who had the second highest IQ in her college class, was a 
great one for relaying choice tidbits. She once said that Christiaan 
Barnard, the first doctor to successfully perform a heart transplant, 
once said something to the effect of


“If you think old people look bad from the outside, you
should see how bad they look from the inside.”

That would presumably also apply to our brains.







Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-09 Thread Robert Wensman
(off topic, but there are something relevant for AGI)

My fears about economic libertarianism could be illustrated with a "fish
pond analogy". If there is a small pond with a large number of small fish of
some predatory species, after some time they will cannibalize and
eat each other until in the end just one very, very fat fish remains.
The instability occurs because a fish that has already managed to eat
a peer becomes slightly larger than the rest of the fish, and is therefore
in a better position to keep eating more fish, so its progress can
accelerate. Maybe if the pond is big enough, a handful of very big fish
would remain.
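
A toy Python simulation of that analogy (the pond size, stopping rule, and
"the larger fish always wins" rule are invented purely for illustration):

# Toy fish-pond simulation: two fish meet at random, the larger absorbs
# the smaller, so early winners compound their advantage until only a
# few very fat fish hold nearly all the mass.
import random

random.seed(0)
pond = [1.0] * 100                    # 100 equally small fish

while len(pond) > 5:                  # stop when a handful remain
    a, b = random.sample(range(len(pond)), 2)
    big, small = (a, b) if pond[a] >= pond[b] else (b, a)
    pond[big] += pond[small]          # the bigger fish eats the smaller
    pond.pop(small)

print(sorted(pond, reverse=True))     # a handful of very fat fish remain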

This is of course just an illustration and by no means a proof that the same
thing would occur in a laissez-faire/libertarianism economy. Libertarians
commonly put blame for monopolies on government involvement, and I guess
some would object that I unfairly compare fish that eat each other with a
non-violent economy. But let's just say I do not share their relaxed attitude
towards the potential threat of monopoly, and a bigger fish eating a smaller
fish does have some similarity to a bigger company acquiring a smaller one.

First of all, the consequence of monopoly is so serious that even if the
chance is very slight, there is a strong incentive to try to prevent it from
ever happening. But there are also a lot of details to suggest that a
laissez-faire economy would collapse into monopoly/oligopoly. Effects of
synergy and mass production benefits would be one strong reason why a
completely free market would benefit those companies that are already large,
which could make them grow larger.

*Especially when considering AGI and intelligence enhancement I believe a
libertarian market could be even more unstable. In such a setting, the rich
could literally invest in more intelligence, which would make them even
richer, creating a positive economic feedback loop. A dangerous accelerating
scenario where the intelligence explosion could co-occur with the rise of
world monopoly. We could call it an "AGI induced monopoly explosion". Unless
democracy could challenge such a libertarian market, only a few oligarchs
might have the position to decide the fate of mankind, if they could control
their AGI that is. Although it is just one possible scenario.*

A documentary I saw claimed that Russia was converted to something very
close to a laissez-faire market in the years after the Soviet Union's
collapse. However I don't have any specific details about it, such as
exactly how free the market of that period was. But apparently it caused
chaos and gave rise to a brutal economy with oligarchs controlling the
society. [
http://en.wikipedia.org/wiki/The_Trap_(television_documentary_series)].
Studying what happened in Russia after the fall of communism could give some
insight on the topic.

/R


2007/10/8, Bob Mottram <[EMAIL PROTECTED]>:
>
> Economic libertarianism would be nice if it were to occur.  However,
> in practice companies and governments put in place all sorts of
> anti-competitive structures to lock people into certain modes of
> economic activity.  I think economic activity in general is heavily
> influenced by cognitive biases of various kinds.
>
>
> On 06/10/2007, BillK < [EMAIL PROTECTED]> wrote:
> > On 10/6/07, a wrote:
> > A free market is just a nice intellectual theory that is of no use in
> > the real world.
>


RE: [agi] Religion-free technical content & breaking the small hardware mindset

2007-10-08 Thread Edward W. Porter
Dear indefinite article,

The Wikipedia entry for "Flynn Effect" suggests --  in agreement with your
comment in the below post --  that older people (at least those in the
pre-dementia years) don't get dumber with age relative to their younger
selves, but rather relative to the increasing intelligence of people
younger than themselves (and, thus, relative to re-normed IQ tests).

Perhaps that is correct, but I can tell you that based on my own
experience, my ability to recall things is much worse than it was twenty
years ago.  Furthermore, my ability to spend three or four nights
in a row lying in bed most of the night with my head buzzing with concepts
about an intellectual problem of interest without feeling like a total
zomboid in the following days has substantially declined.

Since most organs of the body diminish in function with age, it would be
surprising if the brain didn't also.

We live in the age of political correctness where it can be dangerous to
one’s career to say anything unfavorable about any large group of people,
particularly one as powerful as the over-45s, who, to a large extent, rule
the world.  (Or even to those in the AARP, which is an extremely powerful
lobby.)  So I don't know how seriously I would take the statements that
age doesn't affect IQ.

My mother, who had the second highest IQ in her college class, was a great
one for relaying choice tidbits.  She once said that Christiaan Barnard,
the first doctor to successfully perform a heart transplant, once said
something to the effect of

“If you think old people look bad from the outside, you
should see how bad they look from the inside.”

That would presumably also apply to our brains.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: a [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 06, 2007 10:00 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content & breaking the small
hardware mindset


Edward W. Porter wrote:
> It's also because the average person loses 10 points in IQ between
> the mid-twenties and mid-forties and another ten points between the
> mid-forties and sixty.  (Help! I'm 59.)
>
> But this is just the average.  Some people hang on to their marbles as
> they age better than others.  And knowledge gained with age can, to
> some extent, compensate for less raw computational power.
>
> The book in which I read this said they age norm IQ tests (presumably
> to keep from offending the people older than mid-forties who
> presumably largely control most of society's institutions, including
> the purchase of IQ tests.)
>
>
I disagree with your theory. I primarily see the IQ drop as a result of
the Flynn effect, not of age.



Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-08 Thread Linas Vepstas
On Sat, Oct 06, 2007 at 10:05:28AM -0400, a wrote:
> I am skeptical that economies follow the self-organized criticality 
> behavior.

Oh. Well, I thought this was a basic principle, commonly cited in
microeconomics textbooks: when there's a demand, producers rush 
to fill the demand. When there's insufficient demand, producers 
go out of business. Etc.

--linas



Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-08 Thread a

Bob Mottram wrote:

Economic libertarianism would be nice if it were to occur.  However,
in practice companies and governments put in place all sorts of
anti-competitive structures to lock people into certain modes of
economic activity.  I think economic activity in general is heavily
influenced by cognitive biases of various kinds.


On 06/10/2007, BillK <[EMAIL PROTECTED]> wrote:
  

On 10/6/07, a wrote:
A free market is just a nice intellectual theory that is of no use in
the real world.

No. Not true. Anti-competitive structures and monopolies won't exist in 
a true free-market society. The free market is self-sustaining. It's 
government regulation that creates monopolies, because companies 
partner up with the government. See the Chicago school and the Austrian 
school of economics for explanations. Monopolies are much less 
likely to exist if there is a smaller government.


As a response to "anti-competitive structures to lock people in": Microsoft 
is a government-supported monopoly. It got its monopoly from the use of 
software patents. Microsoft patented its file formats, APIs, etc., which 
resulted in vendor lock-in. Patent offices, like all bureaucratic 
agencies, are poor in quality, so lots of trivial ideas can be patented. 
Do not misinterpret me: I am not against software patents. This is off 
topic, but I am in the habit of writing defenses.


References
http://www.mises.org/story/2317
http://www.cato.org/pubs/regulation/regv12n2/reg12n2-debow.html
http://www.ruwart.com/Healing/rutoc.html



Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-08 Thread Charles D Hixson

a wrote:

Linas Vepstas wrote:


...
The issue is that there's no safety net protecting against avalanches 
of unbounded size. The other issue is that it's not grains of sand, it's
people.  My bank-account and my brains can insulate me from small 
shocks.
I'd like to have protection against the bigger forces that can wipe 
me out.
I am skeptical that economies follow the self-organized criticality 
behavior.
There aren't any examples. Some would cite the Great Depression, but 
it was caused by the malinvestment created by Central Banks. e.g. The 
Federal Reserve System. See the Austrian Business Cycle Theory for 
details.

In conclusion, economics is a bad analogy with complex systems.
OK.  I'm skeptical that a Free-Market economy has ever existed.  
Possibly the agora of ancient Greece came close.  The Persians thought 
so: "Who are these people who have special places where they go to cheat 
each other?"  However, I suspect that a closer look would show that 
these, also, were regulated to some degree by an external power.  (E.g., 
threat of force from the government if the customers rioted.)




Re: [agi] Religion-free technical content

2007-10-08 Thread Charles D Hixson

Derek Zahn wrote:

Richard Loosemore:

> > a...
I often see it assumed that the step between "first AGI is built" 
(which I interpret as a functioning model showing some degree of 
generally-intelligent behavior) and "god-like powers dominating the 
planet" is a short one.  Is that really likely?
Nobody knows the answer to that one.  The sooner it is built, the less 
likely it is to be true.  As more accessible computing resources become 
available, hard takeoff becomes more likely.


Note that this isn't a quantitative answer.  It can't be.  Nobody really 
knows how much computing power is necessary for an AGI.  In one scenario, 
it would see the internet as its body, and wouldn't even realize that 
people existed until very late in the process.  This is probably one of 
the scenarios that require the least computing power for takeoff and allow 
for fastest spread.  Unfortunately, it's also not very likely to be a 
friendly AI.  It would likely feel about people as we feel about the 
bacteria that make our yogurt.  They can be useful to have around, but 
they're certainly not one's social equals.  (This mode of AI might well 
be social, if, say, it got socialized on chat-lines and newsgroups.  But 
deriving the existence and importance of bodies from those interactions 
isn't a trivial problem.)


The easiest answer isn't necessarily the best one.  (Also note that this 
mode of AI could very likely be developed by a govt. as a weapon for 
cyber-warfare.  Discovering that it was a two-edged sword with a mind of 
its own could be a very late-stage event.)





Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-08 Thread Bob Mottram
Economic libertarianism would be nice if it were to occur.  However,
in practice companies and governments put in place all sorts of
anti-competitive structures to lock people into certain modes of
economic activity.  I think economic activity in general is heavily
influenced by cognitive biases of various kinds.


On 06/10/2007, BillK <[EMAIL PROTECTED]> wrote:
> On 10/6/07, a wrote:
> A free market is just a nice intellectual theory that is of no use in
> the real world.



Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-06 Thread BillK
On 10/6/07, a wrote:
> I am skeptical that economies follow the self-organized criticality
> behavior.
> There aren't any examples. Some would cite the Great Depression, but it
> was caused by the malinvestment created by Central Banks. e.g. The
> Federal Reserve System. See the Austrian Business Cycle Theory for details.
> In conclusion, economics is a bad analogy with complex systems.
>

My objection to economic libertarianism is that it's not a free
market. A 'free' market is an impossibility. There will always be
somebody who is bigger than me or cleverer than me or better educated
than me, etc. A regulatory environment attempts to reduce the
victimisation of the weaker members of the population and introduces
another set of biases to the economy.

A free market is just a nice intellectual theory that is of no use in
the real world.
(Unless you are in the Mafia, of course).

BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50792589-4d8a77


Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-06 Thread a

Linas Vepstas wrote:


My objection to economic libertarianism is its lack of discussion of
"self-organized criticality".  A common example of self-organized
criticality is a sand-pile at the critical point.  Adding one grain
of sand can trigger an avalanche, which can be small, or maybe
(unboundedly) large. Despite avalanches, a sand-pile will maintain its 
critical shape (a cone at some angle).


The concern is that a self-organized economy is almost by definition 
always operating at the critical point, sloughing off excess production,
encouraging new demand, etc. Small or even medium-sized re-organizations
of the economy are good for it: it maintains the economy at its critical
shape, its free-market-optimal shape. Nothing wrong with that free-market
optimal shape, most everyone agrees.

The issue is that there's no safety net protecting against avalanches 
of unbounded size. The other issue is that it's not grains of sand, it's
people.  My bank-account and my brains can insulate me from small shocks.
I'd like to have protection against the bigger forces that can wipe me 
out.
I am skeptical that economies follow the self-organized criticality 
behavior.
There aren't any examples. Some would cite the Great Depression, but it 
was caused by the malinvestment created by Central Banks. e.g. The 
Federal Reserve System. See the Austrian Business Cycle Theory for details.

In conclusion, economics is a bad analogy with complex systems.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50774944-955341


Re: [agi] Religion-free technical content & breaking the small hardware mindset

2007-10-06 Thread a

Edward W. Porter wrote:

It's also because the average person loses 10 points in IQ between the
mid-twenties and mid-forties, and another ten points between the mid-forties
and sixty.  (Help! I'm 59.)  


But this is just the average.  Some people hang on to their marbles as
they age better than others.  And knowledge gained with age can, to some
extent, compensate for less raw computational power.  


The book in which I read this said they age-norm IQ tests (presumably to
keep from offending the people older than mid-forties who presumably
largely control most of society's institutions, including the purchase of
IQ tests.)

  
I disagree with your theory.  I see the IQ drop primarily as a result of 
the Flynn effect, not of age.
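
To put rough numbers on that reading -- a sketch only, assuming the
often-quoted figure of about 3 IQ points of Flynn-effect gain per decade --
a cross-sectional comparison scored against recent norms shows an apparent
gap between age groups even if no individual ever declines:

FLYNN_POINTS_PER_DECADE = 3.0      # assumed average generational gain

def apparent_gap(young_age, old_age):
    """IQ-point gap a cross-sectional comparison would show between two
    age groups scored on the same recent norms, if the entire difference
    came from the Flynn effect rather than from ageing."""
    return (old_age - young_age) / 10.0 * FLYNN_POINTS_PER_DECADE

print(apparent_gap(25, 45))   # ~6 points, mid-twenties vs. mid-forties
print(apparent_gap(45, 60))   # ~4.5 points, mid-forties vs. sixty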


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50774160-ad0d02


RE: [agi] Religion-free technical content & breaking the small hardware mindset

2007-10-05 Thread Edward W. Porter
It's also because the average person loses 10 points in IQ between the
mid-twenties and mid-forties, and another ten points between the mid-forties
and sixty.  (Help! I'm 59.)  

But this is just the average.  Some people hang on to their marbles as
they age better than others.  And knowledge gained with age can, to some
extent, compensate for less raw computational power.  

The book in which I read this said they age-norm IQ tests (presumably to
keep from offending the people older than mid-forties who presumably
largely control most of society's institutions, including the purchase of
IQ tests.)

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Linas Vepstas [mailto:[EMAIL PROTECTED] 
Sent: Friday, October 05, 2007 7:31 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content & breaking the small
hardware mindset


On Wed, Oct 03, 2007 at 08:39:18PM -0400, Edward W. Porter wrote:
> the
> IQ bell curve is not going down.  The evidence is it's going up.

So that's why us old folks 'r gettin' stupider as compared to 
them's young'uns.

--linas

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50724257-8e390c


Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-05 Thread Linas Vepstas
OK, this is very off-topic. Sorry.

On Fri, Oct 05, 2007 at 06:36:34PM -0400, a wrote:
> Linas Vepstas wrote:
> >For the most part, modern western culture espouses and hews to 
> >physical non-violence. However, modern right-leaning "pure" capitalism
> >advocates not only social Darwinism, but also the economic equivalent
> >of rape and murder -- a jungle ethic where only the fittest survive,
> >while thousands can lose jobs, income, housing, etc. thanks to the
> >"natural forces of capitalism".
> >  
> This, anyway, is a common misunderstanding of capitalism.  I suggest you 
> read more about economic libertarianism.

My objection to economic libertarianism is its lack of discussion of
"self-organized criticality".  A common example of self-organized
criticality is a sand-pile at the critical point.  Adding one grain
of sand can trigger an avalanche, which can be small, or maybe
(unboundedly) large. Despite avalanches, a sand-pile will maintain its 
critical shape (a cone at some angle).
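
A minimal sketch of that picture, assuming the standard Bak-Tang-Wiesenfeld
sand-pile rules (the grid size and grain counts below are arbitrary, just for
illustration): drop grains one at a time, topple any cell holding four or
more, and record how many topplings each grain triggers.  The avalanche-size
distribution that falls out is heavy-tailed -- mostly tiny events, plus a few
huge ones.

import random

N = 20                                   # N x N grid of sand heights
grid = [[0] * N for _ in range(N)]

def drop_grain():
    """Drop one grain at a random cell, relax the pile, and
    return the avalanche size (number of topplings)."""
    x, y = random.randrange(N), random.randrange(N)
    grid[x][y] += 1
    unstable = [(x, y)]
    avalanche = 0
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:               # already relaxed by an earlier topple
            continue
        grid[i][j] -= 4                  # topple: send one grain to each neighbour
        avalanche += 1
        if grid[i][j] >= 4:              # still unstable, re-queue it
            unstable.append((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:   # grains toppled off the edge are lost
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return avalanche

sizes = [drop_grain() for _ in range(50000)]
print("largest avalanche:", max(sizes))
print("avalanches bigger than 100 topplings:", sum(1 for s in sizes if s > 100))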

The concern is that a self-organized economy is almost by definition 
always operating at the critical point, sloughing off excess production,
encouraging new demand, etc. Small or even medium-sized re-organizations
of the economy are good for it: it maintains the economy at its critical
shape, its free-market-optimal shape. Nothing wrong with that free-market
optimal shape, most everyone agrees.

The issue is that there's no safety net protecting against avalanches 
of unbounded size. The other issue is that it's not grains of sand, it's
people.  My bank-account and my brains can insulate me from small shocks.
I'd like to have protection against the bigger forces that can wipe me 
out.

--linas

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50672693-e11dc1


Re: [agi] Religion-free technical content & breaking the small hardware mindset

2007-10-05 Thread Linas Vepstas
On Wed, Oct 03, 2007 at 08:39:18PM -0400, Edward W. Porter wrote:
> the
> IQ bell curve is not going down.  The evidence is it's going up.  

So that's why us old folks 'r gettin' stupider as compared to 
them's young'uns.

--linas

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50669278-fabe77


Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-05 Thread a

Linas Vepstas wrote:

On Thu, Oct 04, 2007 at 07:49:20AM -0400, Richard Loosemore wrote:
  
As to exactly how, I don't know, but since the AGI is, by assumption, 
peaceful, friendly and non-violent, it will do it in a peaceful, 
friendly and non-violent manner.



I like to think of myself as "peaceful and non-violent", but others
have occasionally challenged my self-image.

I have also known folks who are physically non-violent, and yet are
emotionally controlling monsters.

For the most part, modern western culture espouses and hews to 
physical non-violence. However, modern right-leaning "pure" capitalism
advocates not only social Darwinism, but also the economic equivalent
of rape and murder -- a jungle ethic where only the fittest survive,
while thousands can lose jobs, income, housing, etc. thanks to the
"natural forces of capitalism".
  
This, anyway, is a common misunderstanding of capitalism.  I suggest you 
read more about economic libertarianism.

So.. will a "friendly AI" also be a radical left-wing economic socialist ??
  
Yes, if you define it to be.  A "friendly AI" would get the best of both 
utopian socialism and capitalism: the anti-coercive nature of capitalism 
and the utopia of utopian socialism.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50659417-dd373e


Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-05 Thread Linas Vepstas
On Thu, Oct 04, 2007 at 07:49:20AM -0400, Richard Loosemore wrote:
> 
> As to exactly how, I don't know, but since the AGI is, by assumption, 
> peaceful, friendly and non-violent, it will do it in a peaceful, 
> friendly and non-violent manner.

I like to think of myself as "peaceful and non-violent", but others
have occasionally challenged my self-image.

I have also known folks who are physically non-violent, and yet are
emotionally controlling monsters.

For the most part, modern western culture espouses and hews to 
physical non-violence. However, modern right-leaning "pure" capitalism
advocates not only social Darwinism, but also the economic equivalent
of rape and murder -- a jungle ethic where only the fittest survive,
while thousands can lose jobs, income, housing, etc. thanks to the
"natural forces of capitalism".

So.. will a "friendly AI" also be a radical left-wing economic socialist ??

--linas

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50633201-155b36


Re: [agi] Religion-free technical content

2007-10-05 Thread Linas Vepstas
On Tue, Oct 02, 2007 at 03:03:35PM -0400, Mark Waser wrote:
> >Do you really think you can show an example of a true moral universal?
> 
> Thou shalt not destroy the universe.
> Thou shalt not kill every living and/or sentient being including yourself.
> Thou shalt not kill every living and/or sentient except yourself.

What if you discover a sub-stratum alternate-universe thingy that
you believe will be better, but it requires the destruction of this
universe to create? What if you discover that there is a god, and
that this universe is a kind of cancer or illness in god?

(Disclaimer: I did not come up with this; it's from some sci-fi 
book I read as a teen.)

Whoops.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50615643-d29c68


Re: [agi] Religion-free technical content

2007-10-05 Thread Matt Mahoney

--- Mike Dougherty <[EMAIL PROTECTED]> wrote:

> On 10/5/07, Mark Waser <[EMAIL PROTECTED]> wrote:
> > > Then I guess we are in perfect agreement.  Friendliness is what the
> > > average
> > > person would do.
> >
> > Which one of the words in "And not my proposal" wasn't clear?  As far as I
> > am concerned, friendliness is emphatically not what the average person
> would
> > do.
> 
> Yeah - Computers already do what the average person would:  wait
> expectantly to be told exactly what to do and how to behave.  I guess
> it's a question of how cynically we define the average person.

Now you all know damn well what I was trying to say.  I thought only computers
were supposed to have this problem.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50580206-f3a97b


Re: [agi] Religion-free technical content

2007-10-05 Thread Mike Dougherty
On 10/5/07, Mark Waser <[EMAIL PROTECTED]> wrote:
> > Then I guess we are in perfect agreement.  Friendliness is what the
> > average
> > person would do.
>
> Which one of the words in "And not my proposal" wasn't clear?  As far as I
> am concerned, friendliness is emphatically not what the average person would
> do.

Yeah - Computers already do what the average person would:  wait
expectantly to be told exactly what to do and how to behave.  I guess
it's a question of how cynically we define the average person.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50390046-8654d8


Re: [agi] Religion-free technical content

2007-10-05 Thread Mark Waser
Then I guess we are in perfect agreement.  Friendliness is what the average
person would do.


Which one of the words in "And not my proposal" wasn't clear?  As far as I 
am concerned, friendliness is emphatically not what the average person would 
do.



- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Friday, October 05, 2007 10:40 AM
Subject: **SPAM** Re: [agi] Religion-free technical content




--- Mark Waser <[EMAIL PROTECTED]> wrote:


> Then state the base principles or the algorithm that generates them,
> without
> ambiguity and without appealing to common sense.  Otherwise I have to
> believe
> they are complex too.

Existence proof to disprove your "I have to believe . . . . "

1.  Magically collect all members of the species.
2.  Magically fully inform them of all relevant details.
3.  Magically force them to select moral/ethical/friendly, neutral, or
immoral/unethical/unfriendly.
4.  If 50% or less select immoral/unethical/unfriendly, then it's 
friendly.

If >50% select immoral/unethical/unfriendly, then it's unfriendly.

Simple.  Unambiguous.  Impossible to implement.  (And not my proposal)


Then I guess we are in perfect agreement.  Friendliness is what the average
person would do.  So how *would* you implement it?



- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, October 04, 2007 7:26 PM
Subject: **SPAM** Re: [agi] Religion-free technical content


> --- Mark Waser <[EMAIL PROTECTED]> wrote:
>> I'll repeat again since you don't seem to be paying attention to what 
>> I'm

>> saying -- "The determination of whether a given action is friendly or
>> ethical or not is certainly complicated but the base principles are
>> actually
>> pretty darn simple."
>
> Then state the base principles or the algorithm that generates them,
> without
> ambiguity and without appealing to common sense.  Otherwise I have to
> believe
> they are complex too.
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
>


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;




-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50386329-c4a01e


Re: [agi] Religion-free technical content

2007-10-05 Thread Matt Mahoney

--- Mark Waser <[EMAIL PROTECTED]> wrote:

> > Then state the base principles or the algorithm that generates them, 
> > without
> > ambiguity and without appealing to common sense.  Otherwise I have to 
> > believe
> > they are complex too.
> 
> Existence proof to disprove your "I have to believe . . . . "
> 
> 1.  Magically collect all members of the species.
> 2.  Magically fully inform them of all relevant details.
> 3.  Magically force them to select moral/ethical/friendly, neutral, or 
> immoral/unethical/unfriendly.
> 4.  If 50% or less select immoral/unethical/unfriendly, then it's friendly. 
> If >50% select immoral/unethical/unfriendly, then it's unfriendly.
> 
> Simple.  Unambiguous.  Impossible to implement.  (And not my proposal)

Then I guess we are in perfect agreement.  Friendliness is what the average
person would do.  So how *would* you implement it?

> 
> - Original Message - 
> From: "Matt Mahoney" <[EMAIL PROTECTED]>
> To: 
> Sent: Thursday, October 04, 2007 7:26 PM
> Subject: **SPAM** Re: [agi] Religion-free technical content
> 
> 
> > --- Mark Waser <[EMAIL PROTECTED]> wrote:
> >> I'll repeat again since you don't seem to be paying attention to what I'm
> >> saying -- "The determination of whether a given action is friendly or
> >> ethical or not is certainly complicated but the base principles are 
> >> actually
> >> pretty darn simple."
> >
> > Then state the base principles or the algorithm that generates them, 
> > without
> > ambiguity and without appealing to common sense.  Otherwise I have to 
> > believe
> > they are complex too.
> >
> >
> > -- Matt Mahoney, [EMAIL PROTECTED]
> >
> > -
> > This list is sponsored by AGIRI: http://www.agiri.org/email
> > To unsubscribe or change your options, please go to:
> > http://v2.listbox.com/member/?&;
> > 
> 
> 
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
> 


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50375599-b488f1


Re: [agi] Religion-free technical content

2007-10-05 Thread Mark Waser
Then state the base principles or the algorithm that generates them, without
ambiguity and without appealing to common sense.  Otherwise I have to believe
they are complex too.


Existence proof to disprove your "I have to believe . . . . "

1.  Magically collect all members of the species.
2.  Magically fully inform them of all relevant details.
3.  Magically force them to select moral/ethical/friendly, neutral, or 
immoral/unethical/unfriendly.
4.  If 50% or less select immoral/unethical/unfriendly, then it's friendly. 
If >50% select immoral/unethical/unfriendly, then it's unfriendly.


Simple.  Unambiguous.  Impossible to implement.  (And not my proposal)
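
Transcribed literally, just to underline that the decision rule itself is
trivial and all of the impossibility lives in the three magic steps:

def friendly(votes):
    """votes: one verdict per member of the species, each one of
    'moral', 'neutral', or 'immoral'.  The action is friendly iff
    no more than half the species calls it immoral/unethical/unfriendly."""
    immoral = sum(1 for v in votes if v == 'immoral')
    return immoral <= len(votes) / 2.0

print(friendly(['moral', 'neutral', 'immoral']))   # True  (1 of 3 object)
print(friendly(['immoral', 'immoral', 'moral']))   # False (2 of 3 object)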

- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Thursday, October 04, 2007 7:26 PM
Subject: **SPAM** Re: [agi] Religion-free technical content



--- Mark Waser <[EMAIL PROTECTED]> wrote:

I'll repeat again since you don't seem to be paying attention to what I'm
saying -- "The determination of whether a given action is friendly or
ethical or not is certainly complicated but the base principles are actually
pretty darn simple."


Then state the base principles or the algorithm that generates them, without
ambiguity and without appealing to common sense.  Otherwise I have to believe
they are complex too.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50329295-47e942


Re: [agi] Religion-free technical content

2007-10-04 Thread Matt Mahoney
--- Mark Waser <[EMAIL PROTECTED]> wrote:
> I'll repeat again since you don't seem to be paying attention to what I'm 
> saying -- "The determination of whether a given action is friendly or 
> ethical or not is certainly complicated but the base principles are actually
> pretty darn simple." 

Then state the base principles or the algorithm that generates them, without
ambiguity and without appealing to common sense.  Otherwise I have to believe
they are complex too.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50198284-daaf75


Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread Vladimir Nesov
On 10/4/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> We can't build a system that learns as fast as a 1-year-old just now. Which is
> our most likely next step: (a) A system that does learn like a 1-year-old, or
> (b) a system that can learn 1000 times as fast as an adult?
>
> Following Moore's law and its software cognates, I'd say give me the former
> and I'll give you the latter in a decade. With lots of hard work. Then and
> only then will you have something that's able to improve itself faster than a
> high-end team of human researchers and developers could.

You can't know that if you don't have any working algorithm or theory
that's able to predict required computational power... At least there
is no reason to expect that when AGI is implemented, hardware existing
at that time is going to provide it with exactly the resources of a
human being. It might be that hardware will not be ready, or that
it'll be enough to run it 1000 times faster. So, in the event that hardware
is sufficient, it will probably be enough for the AGI to run much faster
than a human.

-- 
Vladimir Nesovmailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50126912-aa8f49


Re: [agi] Religion-free technical content

2007-10-04 Thread Matt Mahoney

--- Mark Waser <[EMAIL PROTECTED]> wrote:

> > I mean that ethics or friendliness is an algorithmically complex function,
> > like our legal system.  It can't be simplified.
> 
> The determination of whether a given action is friendly or ethical or not is
> certainly complicated but the base principles are actually pretty darn
> simple.

If the base principles are to not harm humans or to do what humans tell them,
then these are not simple.  We both know what that means but we need common
sense to do so.  Common sense is not algorithmically simple.  Ask Doug Lenat.


> > However, I don't believe that friendliness can be made stable through RSI.
>  
> 
> Your wording is a bit unclear here.  RSI really has nothing to do with
> friendliness other than the fact that RSI makes the machine smarter and the
> machine then being smarter *might* have any of the consequences of:
>   1.. understanding friendliness better
>   2.. evaluating whether something is friendly better
>   3.. convincing the machine that friendliness should only apply to the most
> evolved life-form (something that this less-evolved life-form sees as
> patently ridiculous)
> I'm assuming that you mean you believe that friendliness can't be made
> stable under improving intelligence.  I believe that you're wrong.

I mean that an initially friendly AGI might not be friendly after RSI.  If the
child AGI is more intelligent, then the parent will not be able to fully
evaluate its friendliness.

> > We
> > can summarize the function's decision process as "what would the average
> human
> > do in this situation?"
> 
> That's not an accurate summary as far as I'm concerned.  I don't want
> *average* human judgement.  I want better.

Who decides what is "better"?  If the AGI decides, then future generations
will certainly be better -- by its definition.

> I suspect that our best current instincts are fairly close to friendliness. 

Our current instincts allow war, crime, torture, and genocide.  Future
generations will look back at us as barbaric.  Do you want to freeze the AGI's
model of ethics at its current level, or let it drift in a manner beyond our
control? 

> > Second, as I mentioned before, RSI is necessarily experimental, and
> therefore
> > evolutionary, and the only stable goal in an evolutionary process is rapid
> > reproduction and acquisition of resources.  
> 
> I disagree strongly.  Experimental only implies a weak meaning of the term
> evolutionary and your assertion that the only stable goal in an evolutionary
> process is rapid reproduction and acquisition of resources may apply to the
> most obvious case of animal evolution but it certainly doesn't apply to
> numerous evolutionary process that scientists perform all the time (For
> example, when scientists are trying to evolve a protein that binds to a
> certain receptor.  In that case, the stable goal is binding strength and
> nothing else since the scientists then provide the reproduction for the best
> goal-seekers).

That only works because a more intelligent entity (the scientists) controls
the goal.  RSI is uncontrolled, just like biological evolution.

> So you don't believe that humans will self-improve?  You don't believe that
> humans will be able to provide something that the AGI might value?  You
> don't believe that a friendly AGI would be willing not to hog *all* the
> resources?  Personally, I think that the worst case with a friendly AGI is
> that we would end up as pampered pets until we could find a way to free
> ourselves of our biology.

AGI may well produce a utopian society.  But as you say, there might not be
anything in it that resembles human life.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50018754-0d8000


Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread J Storrs Hall, PhD
On Thursday 04 October 2007 11:50:21 am, Bob Mottram wrote:
> To me this seems like elevating the status of nanotech to magic.
> Even given RSI and the ability of the AGI to manufacture new computing
> resources it doesn't seem clear to me how this would enable it to
> prevent other AGIs from also reaching RSI capability.  

Hear, hear and again I say hear, hear!

There's a lot of "and then a miracle occurs in step 2" in the "we build a 
friendly AI and it takes over the world and saves our asses" type reasoning 
we see so much of. (Or the "somebody builds an unfriendly AI and it takes 
over the world and wipes us out" reasoning as well.)

We can't build a system that learns as fast as a 1-year-old just now. Which is 
our most likely next step: (a) A system that does learn like a 1-year-old, or 
(b) a system that can learn 1000 times as fast as an adult?

Following Moore's law and its software cognates, I'd say give me the former 
and I'll give you the latter in a decade. With lots of hard work. Then and 
only then will you have something that's able to improve itself faster than a 
high-end team of human researchers and developers could. 
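
Spelled out as arithmetic -- on the assumed round figure that hardware and
the software cognates together double effective performance about once a
year:

\[ 2^{10} = 1024 \approx 10^{3}, \]

i.e. ten yearly doublings across the decade supply the factor of ~1000;
hardware's classical 18-to-24-month doubling alone gives only about 2^5 to
2^7 (roughly 30x to 130x) over the same period, which is why the software
cognates have to carry the rest.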

Furthermore, there's a natural plateau waiting for it. That's where it has to 
leave off learning by absorbing knowledge from humans (reading textbooks and 
research papers, etc.) and start doing the actual science itself. 

I have heard NO ONE give an argument that puts a serious dent in this, to my 
way of thinking.

Josh


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50014668-f60c12


Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread BillK
On 10/4/07, Bob Mottram <[EMAIL PROTECTED]> wrote:
> To me this seems like elevating the status of nanotech to magic.
> Even given RSI and the ability of the AGI to manufacture new computing
> resources it doesn't seem clear to me how this would enable it to
> prevent other AGIs from also reaching RSI capability.  Presumably
> "lesser techniques" means black hat activity, or traditional forms of
> despotism.  There seems to be a clarity gap in the theory here.
>


The first true AGI may be friendly, as suggested by Richard Loosemore.
But if the military are working on developing an intelligent weapons
system, then a sub-project will be a narrow AI project designed
specifically to seek out and attack the competition *before* it
becomes a true AGI.  The Chinese are already constantly probing and
attacking the western internet sites.

BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49977621-104d4e


Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread Bob Mottram
To me this seems like elevating the status of nanotech to magic.
Even given RSI and the ability of the AGI to manufacture new computing
resources it doesn't seem clear to me how this would enable it to
prevent other AGIs from also reaching RSI capability.  Presumably
"lesser techniques" means black hat activity, or traditional forms of
despotism.  There seems to be a clarity gap in the theory here.



On 04/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Bob Mottram wrote:
> > On 04/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> >> As to exactly how, I don't know, but since the AGI is, by assumption,
> >> peaceful, friendly and non-violent, it will do it in a peaceful,
> >> friendly and non-violent manner.
> >
> > This seems very vague.  I would suggest that if there is no clear
> > mechanism for stopping someone from developing an AGI then such
> > enforcement will not occur in practice.
>
> Oh, in my haste I forgot to remind you that this assumes RSI:  the first
> AGI to be built will undoubtedly be used to design faster and possibly
> smarter forms of AGI, in a rapidly escalating process.
>
> It is only under *those* assumptions that it will take action.  It will
> invent techniques to handle the situation.  Nanotech, probably, but
> perhaps lesser techniques will do.
>
> Clearly, at the base level it will not be able to do anything, if it
> doesn't have access to any effectors.
>
>
> Richard Loosemore
>
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
>

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49857713-6526fe


Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread Richard Loosemore

Bob Mottram wrote:

On 04/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:

As to exactly how, I don't know, but since the AGI is, by assumption,
peaceful, friendly and non-violent, it will do it in a peaceful,
friendly and non-violent manner.


This seems very vague.  I would suggest that if there is no clear
mechanism for stopping someone from developing an AGI then such
enforcement will not occur in practice.


Oh, in my haste I forgot to remind you that this assumes RSI:  the first 
AGI to be built will undoubtedly be used to design faster and possibly 
smarter forms of AGI, in a rapidly escalating process.


It is only under *those* assumptions that it will take action.  It will 
invent techniques to handle the situation.  Nanotech, probably, but 
perhaps lesser techniques will do.


Clearly, at the base level it will not be able to do anything, if it 
doesn't have access to any effectors.



Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49776231-a7ad14


Re: [agi] Religion-free technical content

2007-10-04 Thread Mark Waser
> I mean that ethics or friendliness is an algorithmically complex function,
> like our legal system.  It can't be simplified.

The determination of whether a given action is friendly or ethical or not is 
certainly complicated but the base principles are actually pretty darn simple.

> However, I don't believe that friendliness can be made stable through RSI.  

Your wording is a bit unclear here.  RSI really has nothing to do with 
friendliness other than the fact that RSI makes the machine smarter and the 
machine then being smarter *might* have any of the consequences of:
  1.. understanding friendliness better
  2.. evaluating whether something is friendly better
  3.. convincing the machine that friendliness should only apply to the most 
evolved life-form (something that this less-evolved life-form sees as patently 
ridiculous)
I'm assuming that you mean you believe that friendliness can't be made stable 
under improving intelligence.  I believe that you're wrong.

> We
> can summarize the function's decision process as "what would the average human
> do in this situation?"

That's not an accurate summary as far as I'm concerned.  I don't want *average* 
human judgement.  I want better.

> The function therefore has to be
> modifiable because human ethics changes over time, e.g. attitudes toward the
> rights of homosexuals, the morality of slavery, or whether hanging or
> crucifixion is an appropriate form of punishment.

I suspect that our best current instincts are fairly close to friendliness.  
Humans started out seriously unfriendly because friendly entities *don't* 
survive in an environment populated only by unfriendlies.  As society grows and 
each individual becomes friendlier, it's an upward spiral to where we need/want 
to be.  I think that the top of the spiral (i.e. the base principles) is pretty 
obvious.  I think that the primary difficulties are determining all the cases 
where we're constrained by circumstances, what won't work yet, and where we 
can't determine what is best.

> Second, as I mentioned before, RSI is necessarily experimental, and therefore
> evolutionary, and the only stable goal in an evolutionary process is rapid
> reproduction and acquisition of resources.  

I disagree strongly.  Experimental only implies a weak meaning of the term 
evolutionary, and your assertion that the only stable goal in an evolutionary 
process is rapid reproduction and acquisition of resources may apply to the 
most obvious case of animal evolution but it certainly doesn't apply to 
numerous evolutionary process that scientists perform all the time (For 
example, when scientists are trying to evolve a protein that binds to a certain 
receptor.  In that case, the stable goal is binding strength and nothing else 
since the scientists then provide the reproduction for the best goal-seekers).
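
A sketch of that kind of directed-evolution loop, with a made-up scoring
function standing in for a measured binding strength (the alphabet, target,
and numbers below are invented for illustration).  The point it is meant to
show: the selection pressure is supplied entirely from outside the loop, so
the only "goal" the process can drift toward is whatever the experimenter
scores.

import random

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"      # the 20 amino-acid letters
TARGET = "ACDEFGHIKL"                  # stand-in for an ideal binder

def fitness(seq):
    """Toy stand-in for measured binding strength: positions matching TARGET."""
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, rate=0.1):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in seq)

# random starting library of 50 sequences
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]          # the experimenter keeps only the best binders
    population = [mutate(random.choice(parents)) for _ in range(50)]

best = max(population, key=fitness)
print(best, fitness(best))             # converges toward TARGET, nothing else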

> But as AGI grows more powerful, humans
> will be less significant and more like a lower species that competes for
> resources.

So you don't believe that humans will self-improve?  You don't believe that 
humans will be able to provide something that the AGI might value?  You don't 
believe that a friendly AGI would be willing not to hog *all* the resources?  
Personally, I think that the worst case with a friendly AGI is that we would 
end up as pampered pets until we could find a way to free ourselves of our 
biology.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49672841-ba128c

Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread Bob Mottram
On 04/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> As to exactly how, I don't know, but since the AGI is, by assumption,
> peaceful, friendly and non-violent, it will do it in a peaceful,
> friendly and non-violent manner.

This seems very vague.  I would suggest that if there is no clear
mechanism for stopping someone from developing an AGI then such
enforcement will not occur in practice.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49661340-6aec9f


Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread Richard Loosemore

Bob Mottram wrote:

On 04/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Linas Vepstas wrote:

Um, why, exactly, are you assuming that the first one will be friendly?
The desire for self-preservation, by e.g. rooting out and exterminating
all (potentially unfriendly) competing AGI, would not be what I'd call
"friendly" behavior.




What I mean is that ASSUMING the first one is friendly (that assumption
being based on a completely separate line of argument), THEN it will be
obliged, because of its commitment to friendliness, to immediately
search the world for dangerous AGI projects and quietly ensure that none
of them are going to become a danger to humanity.



Whether you call it "extermination" or "ensuring they won't be a
danger" the end result seems like the same thing.  In the world of
realistic software development how is it proposed that this kind of
neutralisation (or "termination" if you prefer) should occur ?  Are we
talking about black hat type activity here, or agents of the state
breaking down doors and seizing computers?


Well, forgive me, but do you notice that you are always trying to bring 
it back to language that implies malevolence?


It is this very implication of malevolent intent that I am saying is 
unjustified, because it makes it seem like it is something that it is not.


As to exactly how, I don't know, but since the AGI is, by assumption, 
peaceful, friendly and non-violent, it will do it in a peaceful, 
friendly and non-violent manner.




Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49658671-fcb107


Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread Bob Mottram
On 04/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Linas Vepstas wrote:
> > Um, why, exactly, are you assuming that the first one will be friendly?
> > The desire for self-preservation, by e.g. rooting out and exterminating
> > all (potentially unfriendly) competing AGI, would not be what I'd call
> > "friendly" behavior.


> What I mean is that ASSUMING the first one is friendly (that assumption
> being based on a completely separate line of argument), THEN it will be
> obliged, because of its commitment to friendliness, to immediately
> search the world for dangerous AGI projects and quietly ensure that none
> of them are going to become a danger to humanity.


Whether you call it "extermination" or "ensuring they won't be a
danger" the end result seems like the same thing.  In the world of
realistic software development how is it proposed that this kind of
neutralisation (or "termination" if you prefer) should occur ?  Are we
talking about black hat type activity here, or agents of the state
breaking down doors and seizing computers?

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49651474-cd3887


Small amounts of Complexity [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread Richard Loosemore

Linas Vepstas wrote:

On Wed, Oct 03, 2007 at 12:20:10PM -0400, Richard Loosemore wrote:
Second, You mention the 3-body problem in Newtonian mechanics.  Although 
I did not use it as such in the paper, this is my poster child of a 
partial complex system.  I often cite the case of planetary system 
dynamics as an example of a real physical system that is PARTIALLY 
complex, because it is mostly governed by regular dynamics (which lets 
us predict solar eclipse precisely), but also has various minor aspects 
that are complex, such as Pluto's orbit, braiding effects in planetary 
rings, and so on.


Richard, we had this conversation in private, but we can have it again
in public. J Storrs Hall is right. You can't actually say that the
3-body problem has "various minor aspects that are complex, such as
Pluto's orbit". That's just plain wrong. 


The phenomenon you are describing is known as the "small divisors
problem", and has been studied for several hundred years, with a
particularly thick corpus developed about 150 years ago, if I remember
rightly. The initial hopes of astronomers were that planetary motion
would be exactly as you describe it: that it's mostly regular dynamics, 
with just some minor aspects, some minor corrections.  

This hope was dashed. The minor corrections, or perturbations, have 
a denominator, in which appear ratios of periods of orbits. Some of
these denominators can get arbitrarily small, implying that the "small
correction" is in fact unboundedly large. This was discovered, I dunno,
several hundred years ago, and elucidated in the 19th century. Both
Poincare and Einstein made notable contributions. Modern research 
into chaos theory has shed new insight into "what's really going on"; 
it has *not*, however, made planetary motion only a "partially
complicated system".  It is quite fully wild and wooly.  

In a very deep sense, planetary motion is wildly and insanely 
unpredictable.  Just because we can work out numerical simulations 
for the next million years does not mean that the system is complex 
in only minor ways; this is a fallacious deduction.


Note the probabilities of Pluto going bonkers are not comparable
to the sun tunneling into Bloomingdale's but are in fact much, much
higher. Pluto could fly off tomorrow, and the probability is big
enough that you have to actually account for it.

The problem with this whole email thread tends to be that many people 
are willing to agree with your conclusions, but dislike the manner in
which they are arrived at. Brushing off planetary motion, or the 
Turing-completeness of Conway's life, just basically points to
a lack of understanding of the basic principles to which you appeal.


Linas,

The difficulty I sometimes have with discussions in this format is that 
it is perfectly acceptable for people to disagree with the ideas, but 
they should keep personal insults OUT of the discussion -- and 
accordingly, in my reactions to other people, I *never* introduce ad 
hominem remarks, I only *respond* to ad hominem insults from others.  I 
responded that way when Josh decided to disagree by using extremely 
insulting language.  To anyone who disagrees politely, I will put in 
huge amounts of effort to meet their criticisms, help clarify the 
issues, apologize for any lack of clarity on my part, etc.


Now, to the subject at hand.

I hear what you are saying about the 3-body problem.  [I would have been 
happier if YOU had managed to phrase it without making assertions about 
what I do and do not understand, because I earned a physics degree, with 
a strong Astronomy component, back in 1979, and I have been aware of 
these issues for a very long time].


Even though you assertively declare that my use of the example of 
planetary orbits is "just plain wrong", I knew exactly what I was doing 
when I picked the example, and it is precisely correct.


I will explain why.

The core issue has to do with what I actually mean when I say that 
planetary orbits contain a "small amount of complexity".  You are 
interpreting that statement one way, but I am using in a different way. 
 It is my usage that matters, because this discussion is, after all, 
about my paper, the way I used the phrase in that paper, and the way 
that other people who talk about "amounts of complexity" would tend to 
use that phrase.


The "amount of complexity" is all about the exactness of an overall 
scientific explanation for a system.  It is about the extent to which a 
normal, scientific explanation can be set down and used to explain the 
system.  Is it possible to find an explanation that covers the state of 
that system very accurately for a very large fraction of that system's 
lifetime?  If the answer is yes, then the system does not have much 
complexity in it.  If the vast bulk of the lifetime of that system 
cannot be understood by any normal scientific explanation (i.e. cannot 
be predicted), and if the behavior is not completely random, then we 
would say that the system contains a large amount of complexity.

The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-04 Thread Richard Loosemore

Linas Vepstas wrote:

On Tue, Oct 02, 2007 at 01:20:54PM -0400, Richard Loosemore wrote:
When the first AGI is built, its first actions will be to make sure that 
nobody is trying to build a dangerous, unfriendly AGI.  


Yes, OK, granted, self-preservation is a reasonable character trait.

After that 
point, the friendliness of the first one will determine the 
subsequent motivations of the entire population, because they will 
monitor each other.


Um, why, exactly, are you assuming that the first one will be friendly?
The desire for self-preservation, by e.g. rooting out and exterminating
all (potentially unfriendly) competing AGI, would not be what I'd call
"friendly" behavior.

There's also a strong sense that winner-takes-all, or
first-one-takes-all, as the first one is strongly motivated,
by instinct for self-preservation, to make sure that no other
AGI comes to exist that could threaten, dominate or terminate it.

In fact, the one single winner, out of sheer loneliness and boredom,
might be reduced to running simulations a la Nick Bostrom's simulation
argument (!)

--linas


This is interesting, because you have put your finger on a common 
reaction to the "First AGI will take down the others" idea.


When I talk about this first-to-market effect, I *never* mean that the 
first one will "eliminate" or "exterminate" all the others, and I do not 
mean to imply that it would do so because it feels motives akin to 
self-preservation, or because it does not want to be personally 
dominated, threatened (etc) by some other AGI.


What I mean is that ASSUMING the first one is friendly (that assumption 
being based on a completely separate line of argument), THEN it will be 
obliged, because of its commitment to friendliness, to immediately 
search the world for dangerous AGI projects and quietly ensure that none 
of them are going to become a danger to humanity.  There is absolutely 
no question of it doing this because of a desire for self-preservation, 
or jealousy, feeling threatened, or any of those other motivations, 
because the most important part of the design of the first friendly AGI 
will be that it will not have those motivations.


Not only that, but it will not necessarily wipe out those other AGIs, 
either.  If we value intelligent, sentient life, we may decide that the 
best thing to do with these other AGI designs, if they have reached the 
point of self-awareness, is to let them keep most of their memories but 
modify them slightly so that they can be transferred into the new, 
friendly design.  To be honest, I do not think it is likely that there 
will be others that are functioning at the level of self-awareness at 
that stage, but that's another matter.


So this would be a quiet-but-friendly modification of other systems, to 
put security mechanisms in place, not an aggressive act.  This follows 
directly from the assumption of friendliness of the first AGI.


Some people have talked about aggressive takeovers.  That is a 
completely different kettle of fish, which assumes the first one will be 
aggressive.


For reasons I have stated elsewhere, I think that *in* *practice* the 
first one will not be aggressive.




Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49645537-bddb78


Re: [agi] Religion-free technical content

2007-10-03 Thread Mike Dougherty
On 10/3/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> I think your notion that post-grads with powerful machines would only
> operate in the space of ideas that don't work is unfair.

Yeah, i can agree - it was harsh.  My real intention was to suggest
that NOT having a bigger computer is no excuse for not yet having a
design that works.  IF you find a design that works, the bigger
computer will be the inevitable result.

> Your last paragraph actually seems to make an argument for the value of
> clock cycles because it implies general intelligences will come through
> iterations.  More opps/sec enable iterations to be made faster.

I also believe that general intelligence will require a great deal of
cooperative effort.  The frameworks discussion (Richard, et al) could
provide positive pressure toward that end.  I feel we have a great
deal of communications development to do in order to even begin to express
the essential character of the disparate approaches to the problem,
let alone be able to collaborate on anything but the most basic ideas.
 I don't have a solution (obviously) but I have a vague idea of a type
of problem.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49620438-6f8601


Re: [agi] Religion-free technical content & breaking the small hardware mindset

2007-10-03 Thread Russell Wallace
On 10/4/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> The biggest brick wall is the small-hardware mindset that has been
> absolutely necessary for decades to get anything actually accomplished on
> the hardware of the day.  But it has caused people to close their minds to
> the vast power of brain level hardware and the computational richness and
> complexity it allows, and has caused them, instead, to look for magic
> conceptual bullets that would allow them to achieve human-like AI on
> hardware that has roughly a millionth the computational, representational,
> and interconnect power of the human brain.  That's like trying to model New
> York City with a town of seven people.  This problem has been compounded by
> the pressure for academic specialization and the pressure to produce
> demonstrable results on the type of hardware most have had access to in
> the past.

Very well put!

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49580536-91e968


RE: [agi] Religion-free technical content & breaking the small hardware mindset

2007-10-03 Thread Edward W. Porter
[...] demonstrable results on the type of hardware most have had
access to in the past.



Rather than this small hardware thinking, those in the field of AGI should
open up their minds to the power of big numbers -- complexity as some call
it -- one of the most seminal concepts in all of science.  They should
look at all of the very powerful tools AI has already cooked up for us and
think how these tools can be put together into powerful systems once we
were are free from the stranglehold of massively sub-human hardware - as
we are now starting to be.  They should start thinking how do we actually
do appropriate probabilistic and goal weighted inference in world
knowledge with brain level hardware in real time.



Some have already spent a lot of time thinking about exactly this.  Those
who are interested in AGI -- and haven't already done so -- should follow
their lead.



Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 6:22 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


Edward Porter: I don't know about you, but I think there are actually a lot
of very bright people in the interrelated fields of AGI, AI, Cognitive
Science, and Brain science.  There are also a lot of very good ideas
floating around.

Yes there are bright people in AGI. But there's no one remotely close to
the level, say, of von Neumann or Turing, right? And do you really think a
revolution such as AGI is going to come about without that kind of
revolutionary, creative thinker? Just by tweaking existing systems, and
increasing computer power and complexity?  Has any intellectual revolution
ever happened that way? (Josh?)
  _

This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?
<http://v2.listbox.com/member/?&;
> &

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49575176-b41b51

Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Wed, Oct 03, 2007 at 12:20:10PM -0400, Richard Loosemore wrote:
> 
> Second, You mention the 3-body problem in Newtonian mechanics.  Although 
> I did not use it as such in the paper, this is my poster child of a 
> partial complex system.  I often cite the case of planetary system 
> dynamics as an example of a real physical system that is PARTIALLY 
> complex, because it is mostly governed by regular dynamics (which lets 
> us predict solar eclipse precisely), but also has various minor aspects 
> that are complex, such as Pluto's orbit, braiding effects in planetary 
> rings, and so on.

Richard, we had this conversation in private, but we can have it again
in public. J Storrs Hall is right. You can't actually say that the
3-body problem has "various minor aspects that are complex, such as
Pluto's orbit". That's just plain wrong. 

The phenomenon you are describing is known as the "small divisors
problem", and has been studied for several hundred years, with a
particularly thick corpus developed about 150 years ago, if I remember
rightly. The initial hopes of astronomers were that planetary motion
would be exactly as you describe it: that it's mostly regular dynamics, 
with just some minor aspects, some minor corrections.  

This hope was dashed. The minor corrections, or perturbations, have 
a denominator, in which appear ratios of periods of orbits. Some of
these denominators can get arbitrarily small, implying that the "small
correction" is in fact unboundedly large. This was discovered, I dunno,
several hundred years ago, and elucidated in the 19th century. Both
Poincare and Einstein made notable contributions. Modern research 
into chaos theory has shed new insight into "what's really going on"; 
it has *not*, however, made planetary motion only a "partially
complicated system".  It is quite fully wild and wooly.  

In a very deep sense, planetary motion is wildly and insanely 
unpredictable.  Just because we can work out numerical simulations 
for the next million years does not mean that the system is complex 
in only minor ways; this is a fallacious deduction.

Note the probabilities of Pluto going bonkers are not comparable
to the sun tunneling into Bloomingdale's but are in fact much, much
higher. Pluto could fly off tomorrow, and the probability is big
enough that you have to actually account for it.

The problem with this whole email thread tends to be that many people 
are willing to agree with your conclusions, but dislike the manner in
which they are arrived at. Brushing off planetary motion, or the 
Turing-completeness of Conway's life, just basically points to
a lack of understanding of the basic principles to which you appeal.

> This is the reason why your original remarks deserved to be called 
> 'bullshit':  this kind of confusion would be forgivable in an 
> undergraduate essay, and would have been forgivable in our debate here, 
> except that it was used as a weapon in a contemptuous, sweeping 
> dismissal of my argument.

Actually, his original remarks were spot-on and quite correct.
I think that you are the one who is confused, and I also think
that this kind of name-calling and vulgarism was quite uncalled-for.

-- linas

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49572096-cabddb


Re: [agi] Religion-free technical content

2007-10-03 Thread J Storrs Hall, PhD
On Wednesday 03 October 2007 06:21:46 pm, Mike Tintner wrote:
> Yes there are bright people in AGI. But there's no one remotely close to the 
level, say, of von Neumann or Turing, right? And do you really think a 
revolution such as AGI is going to come about without that kind of 
revolutionary, creative thinker? Just by tweaking existing systems, and 
increasing computer power and complexity?  Has any intellectual revolution 
ever happened that way? (Josh?)

Yes, I think so. Can anybody name the von Neumanns in the hardware field in 
the past 2 decades? And yet look at the progress. I happen to think that 
there are plenty of smart people in AI and related fields, and LOTS of really 
smart people in computational neuroscience. Even without a Newton we are 
likely to get AGI on Kurzweil's schedule, e.g. 2029.

As I pointed out before, human intelligence got here from monkeys in a 
geological eyeblink, and perforce did it in small steps. So if enough people 
keep pushing in all directions, we'll get there (and learn a lot more 
besides). 

If we can take AGI interest now vis-a-vis that of a few years ago as a trend, 
there could be a major upsurge in the number of smart people looking into it 
in the next decade. So we could yet get our new Newton... I think we've 
already had one, Marvin Minsky. He's goddamn smart. People today don't 
realize just how far AI came from nothing up through about 1970 -- and it was 
real AI, what we now call AGI.

BTW, It's also worth pointing out that increasing computer power just flat 
makes the programming easier. Example: nearest-neighbor methods in 
high-dimensional spaces, a very useful technique but hard to program because 
of the limited and arcane datastructures and search methods needed. Given 
enough cpu, forget the balltrees and rip thru the database linearly. Suddenly 
simple, more robust, and there are more things you can do.
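
A sketch of that point, assuming Python/NumPy (the dataset size and
dimensionality are invented for illustration): with enough cycles, the
"dumb" linear scan is a few lines, where a ball tree would be a few
hundred.

    import numpy as np

    # Brute-force nearest neighbour in a high-dimensional space: no ball
    # trees, no arcane data structures -- just rip through the database.
    rng = np.random.default_rng(0)
    database = rng.standard_normal((50_000, 128))    # 50k points, 128-d
    query = rng.standard_normal(128)

    dists = np.linalg.norm(database - query, axis=1)  # one pass over the data
    nearest = int(np.argmin(dists))
    print("nearest index:", nearest, "distance:", float(dists[nearest]))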

Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49568162-324646


Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Tue, Oct 02, 2007 at 01:20:54PM -0400, Richard Loosemore wrote:
> When the first AGI is built, its first actions will be to make sure that 
> nobody is trying to build a dangerous, unfriendly AGI.  

Yes, OK, granted, self-preservation is a reasonable character trait.

> After that 
> point, the first friendliness of the first one will determine the 
> subsequent motivations of the entire population, because they will 
> monitor each other.

Um, why, exactly, are you assuming that the first one will be friendly?
The desire for self-preservation, by e.g. rooting out and exterminating
all (potentially unfriendly) competing AGI, would not be what I'd call
"friendly" behavior.

There's also a strong sense that winner-takes-all, or
first-one-takes-all, as the first one is strongly motivated,
by instinct for self-preservation, to make sure that no other
AGI comes to exist that could threaten, dominate or terminate it.

In fact, the one single winner, out of sheer loneliness and boredom,
might be reduced to running simulations a la Nick Bostrom's simulation
argument (!)

--linas

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49566127-e1a092


Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Wed, Oct 03, 2007 at 06:31:35PM -0400, Edward W. Porter wrote:
> 
> One of them once told me that in Japan it was common for high school boys
> who were interested in math, science, or business to go to abacus classes
> after school or on weekends.  He said once they fully mastered using
> physical abacuses, they were taught to create a visually imagined abacus
> in their mind that they could operate faster than a physical one.
[...]
> 
> He said his talent was not that unusual among bright Japanese, that many
> thousands of Japanese businessmen carry such mental abacuses with them at
> all times.

Marvellous!

So .. one can teach oneself to be an idiot-savant, in a way. Since
Ramanujan is a bit of a legendary hero in math circles, the notion
that one might be able to teach oneself this ability, rather than 
"being born with it", could trigger some folks to try it.  As it
seems a bit tedious ... it might be appealing only to those types
of folks who have the desire to memorize a million digits of Pi ...
I know just the person ... Plouffe ... 

--linas

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49561332-ee1318


RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
Re: The following statement in Linas Vepstas’s  10/3/2007 5:51 PM post:

P.S. THE INDIAN MATHEMATICIAN RAMANUJAN SEEMS TO HAVE MANAGED TO TRAIN A
SET OF NEURONS IN HIS HEAD TO BE A VERY FAST SYMBOLIC MULTIPLIER/DIVIDER.
WITH THIS, HE WAS ABLE TO SEE VAST AMOUNTS (SIX VOLUMES WORTH BEFORE DYING
AT AGE 26) OF STRANGE AND INTERESTING RELATIONSHIPS BETWEEN CERTAIN
EQUATIONS THAT WERE OTHERWISE QUITE OPAQUE TO OTHER HUMAN BEINGS. SO,
"RUNNING AN EMULATOR IN YOUR HEAD" IS NOT IMPOSSIBLE, EVEN FOR HUMANS;
ALTHOUGH, ADMITTEDLY, IT'S EXTREMELY RARE.

As a young patent attorney I worked in a firm in NYC that did a lot of
work for a major Japanese Electronics company.  Each year they sent a
different Japanese employee to our firm to, among other things, improve
their English and learn more about U.S. patent law.  I made a practice of
having lunch with these people because I was fascinated with Japan.

One of them once told me that in Japan it was common for high school boys
who were interested in math, science, or business to go to abacus classes
after school or on weekends.  He said once they fully mastered using
physical abacuses, they were taught to create a visually imagined abacus
in their mind that they could operate faster than a physical one.

I asked if his still worked.  He said it did, and that he expected it to
continue to do so for the rest of his life.  To prove it he asked me to
pick any two three digit numbers and he would see if he could get the
answer faster than I could on a digital calculator.  He won, he had the
answer before I had finished typing in the numbers on the calculator.

He said his talent was not that unusual among bright Japanese, that many
thousands of Japanese businessmen carry such mental abacuses with them at
all times.

So you see how powerful representational and behavioral learning can be in
the human mind.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Linas Vepstas [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 5:51 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On Wed, Oct 03, 2007 at 02:00:03PM -0400, Edward W. Porter wrote:
> From what you say below it would appear human-level AGI would not
> require recursive self improvement,
[...]
> A lot of people on this list seem to hang a lot on RSI, as they use
> it, implying it is necessary for human-level AGI.

Nah. A few people have suggested that an extremely-low IQ "internet worm"
that is capable of modifying its own code might be able to ratchet itself
up to human intelligence levels.  In-so-far as it "modifies its own code",
it's RSI.

First, I don't think such a thing is likely. Secondly, even if it's likely,
one can implement an entirely equivalent thing that doesn't actually "self
modify" in this way, by using e.g. scheme or lisp,
or even with the proper structures, in C.

I think that, at this level, talking about "code that can modify itself"
is smoke-n-mirrors. Self-modifying code is just one of many things in a
programmer's kit bag, and there are plenty of equivalent formulations
that don't actually require changing source code and
recompiling.

Put it this way: if I were an AGI, and I was prohibited from recompiling
my own program, I could still emulate a computer with pencil and paper,
and write programs for my pencil-n-paper computer. (I wouldn't use
pencil-n-paper, of course, I'd "do it in my head"). I might be able to
do this pencil-and-paper emulation pretty danged fast (being AGI and all),
and then re-incorporate those results back into my own thinking.

In fact, I might choose to do all of my thinking on my pen-n-paper
emulator, and, since I was doing it all in my head anyway, I might not
bother to tell my creator that I was doing this. (which is not to say it
would be undetectable .. creator might notice that an inordinate
amount of cpu time is being used in one area, while other previously
active areas have gone dormant).

So a prohibition from modifying one's own code is not really much of a
prohibition at all.

--linas

p.s. The Indian mathematician Ramanujan seems to have managed to train a
set of neurons in his head to be a very fast symbolic multiplier/divider.
With this, he was able to see vast amounts (six volumes worth before
dying at age 26) of strange and interesting relationships between certain
equations that were otherwise quite opaque to other human beings. So,
"running an emulator in your head" is not impossible, even for humans;
although, admittedly, it's extremely rare.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49534399-4aa5a4

Re: [agi] Religion-free technical content

2007-10-03 Thread Matt Mahoney
--- Mark Waser <[EMAIL PROTECTED]> wrote:

> > So do you claim that there are universal moral truths that can be applied
> > unambiguously in every situation?
> 
> What a stupid question.  *Anything* can be ambiguous if you're clueless. 
> The moral truth of "Thou shalt not destroy the universe" is universal.  The 
> ability to interpret it and apply it is clearly not.
> 
> Ambiguity is a strawman that *you* introduced and I have no interest in 
> defending.

I mean that ethics or friendliness is an algorithmically complex function,
like our legal system.  It can't be simplified.  In this sense, I agree with
Richard Loosemore that it would have to be implemented as thousands (or
millions) of soft constraints.
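
One way to picture "thousands of soft constraints" is a minimal Python
sketch like the following (the constraints, weights, and action fields
are invented; this is not a claim about how any actual system would
score friendliness):

    # Each constraint returns a violation score; the system never looks for
    # one rule that settles a case, it sums weighted violations and prefers
    # the action with the lowest total.
    constraints = [
        (10.0, lambda a: 1.0 if a.get("harms_humans") else 0.0),
        ( 3.0, lambda a: 1.0 if a.get("deceives") else 0.0),
        ( 1.0, lambda a: a.get("resource_use", 0.0)),   # graded, not binary
        # ... thousands more in a real system ...
    ]

    def badness(action):
        return sum(w * c(action) for w, c in constraints)

    candidates = [
        {"name": "answer honestly", "resource_use": 0.1},
        {"name": "lie to save effort", "deceives": True, "resource_use": 0.01},
    ]
    print(min(candidates, key=badness)["name"])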

However, I don't believe that friendliness can be made stable through RSI.  We
can summarize the function's decision process as "what would the average human
do in this situation?"  (This is not a simplification.  It still requires a
complex model of the human brain).  The function therefore has to be
modifiable because human ethics changes over time, e.g. attitudes toward the
rights of homosexuals, the morality of slavery, or whether hanging or
crucifixion is an appropriate form of punishment.

Second, as I mentioned before, RSI is necessarily experimental, and therefore
evolutionary, and the only stable goal in an evolutionary process is rapid
reproduction and acquisition of resources.  As long as humans are needed to
supply resources by building computers and interacting with them via hybrid
algorithms, AGI will be cooperative.  But as AGI grows more powerful, humans
will be less significant and more like a lower species that competes for
resources.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49533160-494085


Re: [agi] Religion-free technical content

2007-10-03 Thread Mike Tintner
Edward Porter: I don't know about you, but I think there are actually a lot
of very bright people in the interrelated fields of AGI, AI, Cognitive
Science, and Brain science.  There are also a lot of very good ideas
floating around.

Yes there are bright people in AGI. But there's no one remotely close to the 
level, say, of von Neumann or Turing, right? And do you really think a 
revolution such as AGI is going to come about without that kind of 
revolutionary, creative thinker? Just by tweaking existing systems, and 
increasing computer power and complexity?  Has any intellectual revolution ever 
happened that way? (Josh?)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49530636-069600

RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
To Mike Dougherty regarding the below comment to my prior post:

I think your notion that post-grads with powerful machines would only
operate in the space of ideas that don’t work is unfair.

A lot of post-grads may be drones, but some of them are cranking some
really good stuff.  The article, Learning a Dictionary of Shape-Components
in Visual Cortex: Comparisons with Neurons, Humans and Machines, by Thomas
Serre (accessible by Google), which I cited the other day, is a prime
example.

I don’t know about you, but I think there are actually a lot of very
bright people in the interrelated fields of AGI, AI, Cognitive Science,
and Brain science.  There are also a lot of very good ideas floating
around.  And having seen how much increased computing power has already
sped up and dramatically increased what all these fields are doing, I am
confident that multiplying by several thousand fold more the power of the
machine people in such fields can play with would greatly increase their
productivity.

I am not a fan of huge program size per se, but I am a fan of being able
to store and process a lot of representation.  You can’t compute human
level world knowledge without such power.  That’s the major reason why the
human brain is more powerful than the brains of rats, cats, dogs, and
monkeys -- because it has more representational and processing power.

And although clock cycles can be wasted doing pointless things such as
do-nothing loops, generally being able to accomplish a given useful
computational task in less time makes a system smarter at some level.

Your last paragraph actually seems to make an argument for the value of
clock cycles because it implies general intelligences will come through
iterations.  More ops/sec enable iterations to be made faster.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mike Dougherty [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 5:20 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On 10/3/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> In fact, if the average AI post-grad of today had such hardware to
> play with, things would really start jumping.  Within ten years the
> equivalents of such machines could easily be sold for somewhere between
> $10k and $100k, and lots of post-grads will be playing with them.

I see the only value to giving post-grads the kind of computing hardware
you are proposing is that they can more quickly exhaust the space of ideas
that won't work.  Just because a program has more lines of code does not
make it more elegant and just because there are more clock cycles per unit
time does not make a computer any smarter.

Have you ever computed the first dozen iterations of a sierpinski gasket
by hand?  There appears to be no order at all.  Eventually over enough
iterations the pattern becomes clear.  I have little doubt that general
intelligence will develop in a similar way:  there will be many apparently
unrelated efforts that eventually flesh out in function until they
overlap.  It might not be seamless but there is not enough evidence that
human cognitive processing is a seamless process either.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49523228-fa9460

Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Wed, Oct 03, 2007 at 02:00:03PM -0400, Edward W. Porter wrote:
> From what you say below it would appear human-level AGI would not require
> recursive self improvement, 
[...]
> A lot of people on this list seem to hang a lot on RSI, as they use it,
> implying it is necessary for human-level AGI.

Nah. A few people have suggested that an extremely-low IQ "internet
worm" that is capable of modifying its own code might be able to ratchet
itself up to human intelligence levels.  In-so-far as it "modifies its
own code", it's RSI.

First, I don't think such a thing is likely. Secondly, even if it's
likely, one can implement an entirely equivalent thing that doesn't
actually "self modify" in this way, by using e.g. scheme or lisp, 
or even with the proper structures, in C.

I think that, at this level, talking about "code that can modify
itself" is smoke-n-mirrors. Self-modifying code is just one of many
things in a programmer's kit bag, and there are plenty of equivalent
formulations that don't actually require changing source code and 
recompiling. 

Put it this way: if I were an AGI, and I was prohibited from recompiling
my own program, I could still emulate a computer with pencil and paper,
and write programs for my pencil-n-paper computer. (I wouldn't use
pencil-n-paper, of course, I'd "do it in my head"). I might be able to 
do this pencil-and-paper emulation pretty danged fast (being AGI and all), 
and then re-incorporate those results back into my own thinking. 

In fact, I might choose to do all of my thinking on my pen-n-paper
emulator, and, since I was doing it all in my head anyway, I might not 
bother to tell my creator that I was doing this. (which is not to say
it would be undetectable .. creator might notice that an inordinate 
amount of cpu time is being used in one area, while other previously
active areas have gone dormant).

So a prohibition from modifying one's own code is not really much
of a prohibition at all.
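
A minimal Python sketch of the pencil-and-paper point (a toy register
machine, invented for illustration, not any real AGI architecture): the
host code below is never recompiled or modified, yet the programs it
runs "in its head" -- including ones it composes for itself at runtime --
can be anything at all.

    # A fixed, never-modified interpreter for a toy instruction set.
    def run(program, regs):
        pc = 0
        while pc < len(program):
            op, *args = program[pc]
            if op == "set":   regs[args[0]] = args[1]
            elif op == "add": regs[args[0]] += regs[args[1]]
            elif op == "jnz": pc = args[1] - 1 if regs[args[0]] else pc
            pc += 1
        return regs

    # The outer code never changes, but it can build new inner programs on
    # the fly and run them -- equivalent in power to "self-modification".
    inner = [("set", "x", 0), ("set", "i", 5),
             ("add", "x", "i"), ("set", "one", -1), ("add", "i", "one"),
             ("jnz", "i", 2)]
    print(run(inner, {}))   # x ends up 5+4+3+2+1 = 15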

--linas

p.s. The Indian mathematician Ramanujan seems to have managed to train a
set of neurons in his head to be a very fast symbolic multiplier/divider. 
With this, he was able to see vast amounts (six volumes worth before 
dying at age 26) of strange and interesting relationships between certain 
equations that were otherwise quite opaque to other human beings. So, 
"running an emulator in your head" is not impossible, even for humans; 
although, admittedly, it's extremely rare.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49514235-ad4bd3


Re: [agi] Religion-free technical content

2007-10-03 Thread Mike Dougherty
On 10/3/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> In fact, if the average AI post-grad of today had such hardware to play
> with, things would really start jumping.  Within ten years the equivalents
> of such machines could easily be sold for somewhere between $10k and
> $100k, and lots of post-grads will be playing with them.

I see the only value to giving post-grads the kind of computing
hardware you are proposing is that they can more quickly exhaust the
space of ideas that won't work.  Just because a program has more lines
of code does not make it more elegant and just because there are more
clock cycles per unit time does not make a computer any smarter.

Have you ever computed the first dozen iterations of a sierpinski
gasket by hand?  There appears to be no order at all.  Eventually over
enough iterations the pattern becomes clear.  I have little doubt that
general intelligence will develop in a similar way:  there will be
many apparently unrelated efforts that eventually flesh out in
function until they overlap.  It might not be seamless but there is
not enough evidence that human cognitive processing is a seamless
process either.
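
A sketch of the Sierpinski point in Python (the "chaos game" version;
plotting left out): the first dozen points look like noise, but keep
iterating and the gasket appears.

    import random

    # Chaos game: repeatedly jump halfway toward a randomly chosen corner
    # of a triangle.  A dozen iterations look random; a hundred thousand
    # trace out the Sierpinski gasket.
    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
    x, y = 0.2, 0.2
    points = []
    for _ in range(100_000):
        cx, cy = random.choice(corners)
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        points.append((x, y))

    print(points[:12])   # "no order at all" -- the order only shows in bulk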

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49495105-78df69


RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter


Again a well reasoned response.

With regard to the limitations of AM, I think if the young Doug Lenat and
those of his generation had had 32K processor Blue Gene Ls, with 4TBytes
of RAM, to play with they would have soon started coming up with things
way way beyond AM.

In fact, if the average AI post-grad of today had such hardware to play
with, things would really start jumping.  Within ten years the equivalents
of such machines could easily be sold for somewhere between $10k and
$100k, and lots of post-grads will be playing with them.

Hardware to the people!

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 3:21 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


Thanks!

It's worthwhile being specific about levels of interpretation in the
discussion of self-modification. I can write self-modifying assembly code
that yet does not change the physical processor, or even its microcode if
it's one of those old architectures. I can write a self-modifying Lisp
program that doesn't change the assembly language interpreter that's
running
it.

So it's certainly possible to push the self-modification up the
interpretive
abstraction ladder, to levels designed to handle it cleanly. But the basic

point, I think, stands: there has to be some level that is both
controlling
the way the system does things, and gets modified.

I agree with you that there has been little genetic change in human brain
structure since the paleolithic, but I would claim that culture *is* the
software and it has been upgraded drastically. And I would agree that the
vast bulk of human self-improvement has been at this software level, the
level of learned representations.

If we want to improve our basic hardware, i.e. brains, we'll need to
understand them well enough to do basic engineering on them -- a
self-model.
However, we didn't need that to build all the science and culture we have
so
far, a huge software self-improvement. That means to me that it is
possible
to abstract out the self-model until the part you need to understand and
modify is some tractable kernel. For human culture that is the concept of
science (and logic and evidence and so forth).

This means to me that it should be possible to structure an AGI so that it

could be recursively self improving at a very abstract, highly interpreted

level, and still have a huge amount to learn before it could do anything about
the
next level down.

Regarding machine speed/capacity: yes, indeed. Horsepower is definitely
going
to be one of the enabling factors, over the next decade or two. But I
don't
think AM would get too much farther on a Blue Gene than on a PDP-10 -- I
think it required hyper-exponential time for concepts of a given size.

Josh


On Wednesday 03 October 2007 12:44:20 pm, Edward W. Porter wrote:
> Josh,
>
> Thank you for your reply, copied below.  It was – as have been many of
> your posts – thoughtful and helpful.
>
> I did have a question about the following section
>
> “THE LEARNING PROCESS MUST NOT ONLY IMPROVE THE WORLD MODEL AND
> WHATNOT, BUT MUST IMPROVE (=> MODIFY) *ITSELF*. KIND OF THE WAY
> CIVILIZATION HAS (MORE OR LESS) MOVED FROM RELIGION TO PHILOSOPHY TO
> SCIENCE AS THE METHODOLOGY OF CHOICE FOR ITS SAGES.”
>
> “THAT, OF COURSE, IS SELF-MODIFYING CODE -- THE DARK PLACE IN A COMPUTER
> SCIENTIST'S SOUL WHERE ONLY THE KWISATZ HADERACH CAN LOOK.   :^)”
>
> My question is: if a machine’s world model includes the system’s model
> of itself and its own learned mental representation and behavior
> patterns, is it not possible that modification of these learned
> representations and behaviors could be enough to provide what you are
> talking about -- without requiring modifying its code at some deeper
> level.
>
> For example, it is commonly said that humans and their brains have
> changed very little in the last 30,000 years, that if a new born from
> that age were raised in our society, nobody would notice the
> difference.  Yet in the last 30,000 years the sophistication of
> mankind’s understanding of, and ability to manipulate, the world has
> grown exponentially.  There has been tremendous changes in code, at
> the level of learned representations and learned mental behaviors,
> such as advances in mathematics, science, and technology, but there
> has been very little, if any, significant changes in code at the level
> of inherited brain hardware and software.
>
> Take for example mathematics and algebra.  These are learned mental
> representations and behaviors that let a human manage levels of
> complexity they could not otherwise even begin to.  But my belief is
> that

Re: [agi] Religion-free technical content

2007-10-03 Thread J Storrs Hall, PhD
Thanks!

It's worthwhile being specific about levels of interpretation in the 
discussion of self-modification. I can write self-modifying assembly code 
that yet does not change the physical processor, or even its microcode if 
it's one of those old architectures. I can write a self-modifying Lisp 
program that doesn't change the assembly language interpreter that's running 
it. 

So it's certainly possible to push the self-modification up the interpretive 
abstraction ladder, to levels designed to handle it cleanly. But the basic 
point, I think, stands: there has to be some level that is both controlling 
the way the system does things, and gets modified.
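
A minimal Python sketch of "pushing self-modification up the abstraction
ladder" (a toy example, not any particular architecture): the Python
level is never touched, but the rule table that controls behavior is
itself data, and one of the rules rewrites it.

    # Behavior is driven by a rule table (data, not Python source).  One
    # rule edits the table, so the level that controls what the system does
    # is also the level that gets modified, while the interpreter below it
    # stays fixed.
    rules = {
        "greet":   lambda state, rules: state.update(msg="hello #%d" % state["n"]),
        "count":   lambda state, rules: state.update(n=state["n"] + 1),
        "improve": lambda state, rules: rules.update(
            greet=lambda s, r: s.update(msg="HELLO #%d (v2)" % s["n"])),
    }

    state = {"n": 0, "msg": ""}
    for step in ["count", "greet", "improve", "count", "greet"]:
        rules[step](state, rules)
        print(step, state["msg"])   # greeting style changes after "improve"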

I agree with you that there has been little genetic change in human brain 
structure since the paleolithic, but I would claim that culture *is* the 
software and it has been upgraded drastically. And I would agree that the 
vast bulk of human self-improvement has been at this software level, the 
level of learned representations.

If we want to improve our basic hardware, i.e. brains, we'll need to 
understand them well enough to do basic engineering on them -- a self-model. 
However, we didn't need that to build all the science and culture we have so 
far, a huge software self-improvement. That means to me that it is possible 
to abstract out the self-model until the part you need to understand and 
modify is some tractable kernel. For human culture that is the concept of 
science (and logic and evidence and so forth).

This means to me that it should be possible to structure an AGI so that it 
could be recursively self improving at a very abstract, highly interpreted 
level, and still have a huge amount to learn before it could do anything about the 
next level down.

Regarding machine speed/capacity: yes, indeed. Horsepower is definitely going 
to be one of the enabling factors, over the next decade or two. But I don't 
think AM would get too much farther on a Blue Gene than on a PDP-10 -- I 
think it required hyper-exponential time for concepts of a given size.

Josh


On Wednesday 03 October 2007 12:44:20 pm, Edward W. Porter wrote:
> Josh,
> 
> Thank you for your reply, copied below.  It was – as have been many of
> your posts – thoughtful and helpful.
> 
> I did have a question about the following section
> 
> “THE LEARNING PROCESS MUST NOT ONLY IMPROVE THE WORLD MODEL AND WHATNOT,
> BUT MUST IMPROVE (=> MODIFY) *ITSELF*. KIND OF THE WAY CIVILIZATION HAS
> (MORE OR LESS) MOVED FROM RELIGION TO PHILOSOPHY TO SCIENCE AS THE
> METHODOLOGY OF CHOICE FOR ITS SAGES.”
> 
> “THAT, OF COURSE, IS SELF-MODIFYING CODE -- THE DARK PLACE IN A COMPUTER
> SCIENTIST'S SOUL WHERE ONLY THE KWISATZ HADERACH CAN LOOK.   :^)”
> 
> My question is: if a machine’s world model includes the system’s model of
> itself and its own learned mental representation and behavior patterns, is
> it not possible that modification of these learned representations and
> behaviors could be enough to provide what you are talking about -- without
> requiring modifying its code at some deeper level.
> 
> For example, it is commonly said that humans and their brains have changed
> very little in the last 30,000 years, that if a newborn from that age
> were raised in our society, nobody would notice the difference.  Yet in
> the last 30,000 years the sophistication of mankind’s understanding of,
> and ability to manipulate, the world has grown exponentially.  There have
> been tremendous changes in code, at the level of learned representations
> and learned mental behaviors, such as advances in mathematics, science,
> and technology, but there has been very little, if any, significant
> changes in code at the level of inherited brain hardware and software.
> 
> Take for example mathematics and algebra.  These are learned mental
> representations and behaviors that let a human manage levels of complexity
> they could not otherwise even begin to.  But my belief is that when
> executing such behaviors or remembering such representations, the basic
> brain mechanisms involved – probability, importance, and temporal based
> inference; instantiating general patterns in a context appropriate way;
> context sensitive pattern-based memory access; learned patterns of
> sequential attention shifts, etc. -- are all virtually identical to ones
> used by our ancestors 30,000 years ago.
> 
> I think in the coming years there will be lots of changes in AGI code at a
> level corresponding to the human inherited brain level.  But once human
> level AGI has been created -- with what will obviously have to be a learning
> capability as powerful, adaptive, exploratory, creative, and as capable of
> building upon its own advances as that of a human -- it is not clear to me
> it would require further changes at a level equivalent to the human
> inherited brain level to continue to operate and learn as well as a human,
> any more than have the tremendous advances of human civilization in the
> last 30,000 years.
> 
> Your

RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
From what you say below it would appear human-level AGI would not require
recursive self improvement, because as you appear to define it humans
don't either (i.e., we currently don't artificially substantially expand
the size of our brain).

I wonder what percent of the AGI community would accept that definition? A
lot of people on this list seem to hang a lot on RSI, as they use it,
implying it is necessary for human-level AGI.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Linas Vepstas [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 12:19 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On Mon, Oct 01, 2007 at 10:40:53AM -0400, Edward W. Porter wrote:
> [...]
> RSI (Recursive Self Improvement)
> [...]
> I didn't know exactly what the term covers.
>
> So could you, or someone, please define exactly what its meaning is?
>
> Is it any system capable of learning how to improve its current
> behavior by changing to a new state with a modified behavior, and then
> from that new state (arguably "recursively") improving behavior to yet
> another new state, and so on and so forth?  If so, why wouldn't any
> system doing ongoing automatic learning that changed its behavior be
> an RSI system.

No; learning is just learning.

For example, humans are known to have 5 to 9 short-term memory "slots"
(this has been measured by a wide variety of psychology experiments, e.g.
ability to recall random data, etc.)

When reading a book, watching a movie, replying to an email, or solving
a problem, humans presumably use many or all of these slots (watching
a movie: to remember the characters, plot twists, recent scenes, etc.
Replying to this email: to remember the point that I'm trying to make,
while simultaneously composing a grammatical, pleasant-to-read sentence.)

Now, suppose I could learn enough neuropsychology to grow some extra
neurons in a petri dish, then implant them in my brain, and up my
short-term memory slots to, say, 50-100.  The new me would be like the old
me, except that I'd probably find movies and books to be trite
and boring, as they are threaded together from only a half-dozen
salient characteristics and plot twists (how many characters and
situations are there in Jane Austen's Pride & Prejudice?
Might it not seem like a children's book, since I'll be able
to "hold in mind" its entire plot, and have a whole lotta
short-term memory slots left-over for other tasks?).

Music may suddenly seem lame, being at most a single melody line
that expounds on a chord progression consisting of a half-dozen chords,
each chord consisting of 4-6 notes.  The new me might come to like
multiple melody lines exploring a chord progression of some 50 chords,
each chord being made of 14 or so notes...

The new me would probably be a better scientist: being able to
remember and operate on 50-100 items in short term memory will likely
allow me to decipher a whole lotta biochemistry that leaves current
scientists puzzled.  And after doing that, I might decide that some other
parts of my brain could use expansion too.

*That* is RSI.

--linas


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49387922-edf0e9


RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
Josh,

Thank you for your reply, copied below.  It was – as have been many of
your posts – thoughtful and helpful.

I did have a question about the following section

“THE LEARNING PROCESS MUST NOT ONLY IMPROVE THE WORLD MODEL AND WHATNOT,
BUT MUST IMPROVE (=> MODIFY) *ITSELF*. KIND OF THE WAY CIVILIZATION HAS
(MORE OR LESS) MOVED FROM RELIGION TO PHILOSOPHY TO SCIENCE AS THE
METHODOLOGY OF CHOICE FOR ITS SAGES.”

“THAT, OF COURSE, IS SELF-MODIFYING CODE -- THE DARK PLACE IN A COMPUTER
SCIENTIST'S SOUL WHERE ONLY THE KWISATZ HADERACH CAN LOOK.   :^)”

My question is: if a machine’s world model includes the system’s model of
itself and its own learned mental representation and behavior patterns, is
it not possible that modification of these learned representations and
behaviors could be enough to provide what you are talking about -- without
requiring modifying its code at some deeper level.

For example, it is commonly said that humans and their brains have changed
very little in the last 30,000 years, that if a newborn from that age
were raised in our society, nobody would notice the difference.  Yet in
the last 30,000 years the sophistication of mankind’s understanding of,
and ability to manipulate, the world has grown exponentially.  There have
been tremendous changes in code, at the level of learned representations
and learned mental behaviors, such as advances in mathematics, science,
and technology, but there has been very little, if any, significant
changes in code at the level of inherited brain hardware and software.

Take for example mathematics and algebra.  These are learned mental
representations and behaviors that let a human manage levels of complexity
they could not otherwise even begin to.  But my belief is that when
executing such behaviors or remembering such representations, the basic
brain mechanisms involved – probability, importance, and temporal based
inference; instantiating general patterns in a context appropriate way;
context sensitive pattern-based memory access; learned patterns of
sequential attention shifts, etc. -- are all virtually identical to ones
used by our ancestors 30,000 years ago.

I think in the coming years there will be lots of changes in AGI code at a
level corresponding to the human inherited brain level.  But once human
level AGI has been created -- with what will obviously have to be a learning
capability as powerful, adaptive, exploratory, creative, and as capable of
building upon its own advances as that of a human -- it is not clear to me
it would require further changes at a level equivalent to the human
inherited brain level to continue to operate and learn as well as a human,
any more than have the tremendous advances of human civilization in the
last 30,000 years.

Your implication that civilization had improved itself by moving “from
religion to philosophy to science” seems to suggest that the level of
improvement you say is needed might actually be at the level of learned
representation, including learned representation of mental behaviors.



As a minor note, I would like to point out the following concerning your
statement that:

“ALL AI LEARNING SYSTEMS TO DATE HAVE BEEN "WIND-UP TOYS" “

I think a lot of early AI learning systems, although clearly toys when
compared with humans in many respects, have been amazingly powerful
considering many of them ran on roughly fly-brain-level hardware.  As I
have been saying for decades, I know which end is up in AI -- its
computational horsepower. And it is coming fast.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 10:14 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On Tuesday 02 October 2007 05:50:57 pm, Edward W. Porter wrote:
> The below is a good post:

Thank you!

> I have one major question for Josh.  You said
>
> “PRESENT-DAY TECHNIQUES CAN DO MOST OF THE THINGS THAT AN AI NEEDS
> TO DO,  WITH THE EXCEPTION OF COMING UP WITH NEW REPRESENTATIONS AND
> TECHNIQUES. THAT'S THE SELF-REFERENTIAL KERNEL, THE TAIL-BITING,
> GÖDEL-INVOKING COMPLEX CORE OF THE WHOLE PROBLEM.”
>
> Could you please elaborate on exactly what the “complex core of the
> whole problem” is that you still think is currently missing.

No, but I will try to elaborate inexactly.   :^)

Let me quote Tom Mitchell, Head of Machine Learning Dept. at CMU:

"It seems the real problem with current AI is that NOBODY to my knowledge
is
seriously trying to design a 'never ending learning' machine." (Private
communication)

By which he meant what we tend to call "RSI" here. I think the "coming up
with
new representations and techniques" part is pretty straightforward, the
question is how to do it. Search works, a la a GA, if y

Re: [agi] Religion-free technical content

2007-10-03 Thread Richard Loosemore


I criticised your original remarks because they demonstrated a complete 
lack of understanding of what complex systems actually are.  You said 
things about complex systems that were, quite frankly, ridiculous: 
Turing-machine equivalence, for example, has nothing to do with this.


In your more lengthy criticism, below, you go on to make many more 
statements that are confused, and you omit key pieces of the puzzle that 
I went to great lengths to explain in my paper.  In short, you 
misrepresent what I said and what others have said, and you show signs 
that you did not read the paper, but just skimmed it.


I will deal with your points one at a time.


J Storrs Hall, PhD wrote:

On Tuesday 02 October 2007 08:46:43 pm, Richard Loosemore wrote:

J Storrs Hall, PhD wrote:

I find your argument quotidian and lacking in depth. ...


What you said above was pure, unalloyed bullshit:  an exquisite cocktail 
of complete technical ignorance, patronizing insults and breathtaking 
arrogance. ...


I find this argument lacking in depth, as well.

Actually, much of your paper is right. What I said was that I've heard it all 
before (that's what "quotidian" means) and others have taken it farther than 
you have.


You write (proceedings p. 161) "The term 'complex system' is used to describe 
PRECISELY those cases where the global behavior of the system shows 
interesting regularities, and is not completely random, but where the nature 
of the interaction of the components is such that we would normally expect 
the consequences of those interactions to be beyond the reach of analytic 
solutions." (emphasis added)


But of course even a 6th-degree polynomial is beyond the reach of an analytic 
solution, as is the 3-body problem in Newtonian mechanics. And indeed the 
orbit of Pluto has been shown to be chaotic. But we can still predict with 
great confidence when something as finicky as a solar eclipse will occur 
thousands of years from now. So being beyond analytic solution does not mean 
unpredictable in many, indeed most, practical cases.


There are different degrees of complexity in systems:  there is no black 
and white distinction between "pure complex systems" on the one hand and 
"non-complex" systems on the other.


I made this point in a number of ways in my paper, most especially by 
talking about the "degree of complexity" to be expected in intelligent 
systems, and whether or not they have a "significant amount" of 
complexity.  At no point do I try to claim, or imply, that a system that 
possesses ANY degree of complexity is automatically banged over into the 
same category as the most extreme complex systems.  In fact, I 
explicitly deny it:


"One of the main arguments advanced in this paper is
 that complexity can be present in AI systems in a subtle way.
 This is in contrast to the widespread notion that the opposite
 is true: that those advocating the idea that intelligence involves
 complexity are trying to assert that intelligent behavior should
 be a floridly emergent property of systems in which there is no
 relationship whatsoever between the system components and the
 overall behavior.
 While there may be some who advocate such an extreme-emergence
 agenda, that is certainly not what is proposed here. It is
 simply not true, in general, that complexity needs to make
 itself felt in a dramatic way. Specifically, what is claimed
 here is that complexity can be quiet and unobtrusive, while
 at the same time having a significant impact on the overall
 behavior of an intelligent system."


In your criticism, you misrepresent my argument as a claim that IF any 
system has the smallest amount of complexity in its makeup, THEN it 
should be as totally unpredictable as the most extreme form of complex 
system.  I will show, below, how you make this misrepresentation again 
and again.


First, you talk about 6th-degree polynomials. These are not "systems" in 
any meaningful sense of the word, they are functions.  This is actually 
just a red herring.


Second, You mention the 3-body problem in Newtonian mechanics.  Although 
I did not use it as such in the paper, this is my poster child of a 
partial complex system.  I often cite the case of planetary system 
dynamics as an example of a real physical system that is PARTIALLY 
complex, because it is mostly governed by regular dynamics (which lets 
us predict solar eclipse precisely), but also has various minor aspects 
that are complex, such as Pluto's orbit, braiding effects in planetary 
rings, and so on.


This fits my definition of complexity (which you quote above) perfectly: 
 there do exist "interesting regularities" in the global behavior of 
orbiting bodies (e.g. the presence of ring systems, and the presence of 
braiding effects in those rings systems) that appear to be beyond the 
reach of analytic explanation.


But you cite this as an example of something that contradicts my 
arg

Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Mon, Oct 01, 2007 at 10:40:53AM -0400, Edward W. Porter wrote:
> [...]
> RSI (Recursive Self Improvement)
> [...]
> I didn't know exactly what the term covers.
> 
> So could you, or someone, please define exactly what its meaning is?
> 
> Is it any system capable of learning how to improve its current behavior
> by changing to a new state with a modified behavior, and then from that
> new state (arguably "recursively") improving behavior to yet another new
> state, and so on and so forth?  If so, why wouldn't any system doing
> ongoing automatic learning that changed its behavior be an RSI system.

No; learning is just learning. 

For example, humans are known to have 5 to 9 short-term memory "slots"
(this has been measured by a wide variety of psychology experiments,
e.g. ability to recall random data, etc.)

When reading a book, watching a movie, replying to an email, or solving 
a problem, humans presumably use many or all of these slots (watching 
a movie: to remember the characters, plot twists, recent scenes, etc.
Replying to this email: to remember the point that I'm trying to make,
while simultaneously composing a grammatical, pleasant-to-read sentence.)

Now, suppose I could learn enough neuropsychology to grow some extra
neurons in a petri dish, then implant them in my brain, and up my
short-term memory slots to, say, 50-100.  The new me would be like
the old me, except that I'd probably find movies and books to be trite 
and boring, as they are threaded together from only a half-dozen 
salient characteristics and plot twists (how many characters
and situations are there in Jane Austen's Pride & Prejudice? 
Might it not seem like a children's book, since I'll be able 
to "hold in mind" its entire plot, and have a whole lotta 
short-term memory slots left-over for other tasks?). 

Music may suddenly seem lame, being at most a single melody line 
that expounds on a chord progression consisting of a half-dozen chords, 
each chord consisting of 4-6 notes.  The new me might come to like 
multiple melody lines exploring a chord progression of some 50 chords, 
each chord being made of 14 or so notes...

The new me would probably be a better scientist: being able to 
remember and operate on 50-100 items in short term memory will
likely allow me to decipher a whole lotta biochemistry that leaves
current scientists puzzled.  And after doing that, I might decide
that some other parts of my brain could use expansion too.

*That* is RSI.
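
A toy Python sketch of the distinction being drawn (the numbers and the
"slots" model are entirely invented, nothing neurological is claimed):
"learning" fills the slots the system already has, while "RSI" is the
system rewriting its own slot count.

    class Mind:
        def __init__(self, slots=7):
            self.slots = slots          # short-term memory capacity
            self.memory = []

        def learn(self, item):
            # Ordinary learning: new content, same architecture.
            self.memory = (self.memory + [item])[-self.slots:]

        def self_improve(self, new_slots):
            # RSI: the system changes the machinery that does the learning.
            self.slots = new_slots

    m = Mind()
    for c in "a plot with many characters":
        m.learn(c)
    print(len(m.memory))     # capped at 7 -- old books and movies fit the old limits

    m.self_improve(50)
    for c in "a plot with many characters":
        m.learn(c)
    print(len(m.memory))     # 27 items now fit; the old plots feel thin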

--linas

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49339506-bf6376


Re: [agi] Religion-free technical content

2007-10-03 Thread J Storrs Hall, PhD
On Tuesday 02 October 2007 05:50:57 pm, Edward W. Porter wrote:
> The below is a good post:

Thank you!
 
> I have one major question for Josh.  You said
> 
> “PRESENT-DAY TECHNIQUES CAN DO MOST OF THE THINGS THAT AN AI NEEDS 
> TO DO,  WITH THE EXCEPTION OF COMING UP WITH NEW REPRESENTATIONS AND
> TECHNIQUES. THAT'S THE SELF-REFERENTIAL KERNEL, THE TAIL-BITING,
> GÖDEL-INVOKING COMPLEX CORE OF THE WHOLE PROBLEM.”
> 
> Could you please elaborate on exactly what the “complex core of the whole
> problem” is that you still think is currently missing.

No, but I will try to elaborate inexactly.   :^)

Let me quote Tom Mitchell, Head of Machine Learning Dept. at CMU:

"It seems the real problem with current AI is that NOBODY to my knowledge is 
seriously trying to design a 'never ending learning' machine."
(Private communication)

By which he meant what we tend to call "RSI" here. I think the "coming up with 
new representations and techniques" part is pretty straightforward, the 
question is how to do it. Search works, a la a GA, if you have billions of 
years and trillions of organisms to work with. I personally am too impatient, 
so I'd like to understand how the human brain does it in billions of seconds 
and 3 pounds of mush.

Another way to understand the problem is to say that all AI learning systems 
to date have been "wind-up toys" -- they could learn stuff in some small 
space of possibilities, and then they ran out of steam. That's what happened 
famously with AM and Eurisko.

I conjecture that this will happen with ANY fixed learning process. That means 
that for RSI, the learning process must not only improve the world model and 
whatnot, but must improve (=> modify) *itself*. Kind of the way civilization 
has (more or less) moved from religion to philosophy to science as the 
methodology of choice for its sages.

That, of course, is self-modifying code -- the dark place in a computer 
scientist's soul where only the Kwisatz Haderach can look.   :^)
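
A very modest Python stand-in for "the learning process must improve
itself" (a toy hill-climber, not AM or Eurisko, and the target function
is made up): the loop adapts not just its answer but a parameter of its
own search procedure.

    import random

    def score(x):                       # the fixed "world" being learned about
        return -(x - 3.1416) ** 2

    x, step = 0.0, 1.0                  # candidate answer, plus a parameter
                                        # of the learning process itself
    for _ in range(500):
        trial = x + random.uniform(-step, step)
        if score(trial) > score(x):
            x = trial
            step *= 1.1                 # the move worked: search more boldly
        else:
            step *= 0.98                # it failed: the learner tunes itself down

    print(round(x, 4), "final step size", round(step, 6))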

> Why for example would a Novamente-type system’s representations and
> techniques not be capable of being self-referential in the manner you seem
> to be implying is both needed and currently missing?

It might -- I think it's close enough to be worth the experiment. BOA/Moses 
does have a self-referential element in the Bayesian analysis of the GA 
population. Will it be enough to invent elliptic function theory and 
zero-knowledge proofs and discover the Krebs cycle and gamma-ray bursts and 
write Finnegan's Wake and Snow Crash? We'll see...
 
Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49291559-b3bbfd

Re: [agi] Religion-free technical content

2007-10-03 Thread Mark Waser

So do you claim that there are universal moral truths that can be applied
unambiguously in every situation?


What a stupid question.  *Anything* can be ambiguous if you're clueless. 
The moral truth of "Thou shalt not destroy the universe" is universal.  The 
ability to interpret it and apply it is clearly not.


Ambiguity is a strawman that *you* introduced and I have no interest in 
defending.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49291152-b0abd6


Re: [agi] Religion-free technical content

2007-10-03 Thread J Storrs Hall, PhD
On Tuesday 02 October 2007 08:46:43 pm, Richard Loosemore wrote:
> J Storrs Hall, PhD wrote:
> > I find your argument quotidian and lacking in depth. ...

> What you said above was pure, unalloyed bullshit:  an exquisite cocktail 
> of complete technical ignorance, patronizing insults and breathtaking 
> arrogance. ...

I find this argument lacking in depth, as well.

Actually, much of your paper is right. What I said was that I've heard it all 
before (that's what "quotidian" means) and others have taken it farther than 
you have.

You write (proceedings p. 161) "The term 'complex system' is used to describe 
PRECISELY those cases where the global behavior of the system shows 
interesting regularities, and is not completely random, but where the nature 
of the interaction of the components is such that we would normally expect 
the consequences of those interactions to be beyond the reach of analytic 
solutions." (emphasis added)

But of course even a 6th-degree polynomial is beyond the reach of an analytic 
solution, as is the 3-body problem in Newtonian mechanics. And indeed the 
orbit of Pluto has been shown to be chaotic. But we can still predict with 
great confidence when something as finicky as a solar eclipse will occur 
thousands of years from now. So being beyond analytic solution does not mean 
unpredictable in many, indeed most, practical cases.
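
A quick Python/NumPy illustration of "beyond analytic solution does not
mean unpredictable" (the polynomial is arbitrary): there is provably no
closed-form root formula for degree five and up, yet the roots drop out
numerically to machine precision.

    import numpy as np

    # x^6 - 2x^5 + 3x^3 - x + 7: no formula in radicals exists for its
    # roots (Abel-Ruffini), but predicting them numerically is routine.
    coeffs = [1, -2, 0, 3, 0, -1, 7]
    roots = np.roots(coeffs)
    print(roots)
    print("max residual:", max(abs(np.polyval(coeffs, r)) for r in roots))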

We've spent five centuries learning how to characterize, find the regularities 
in, and make predictions about, systems that are in your precise definition, 
complex. We call this science. It is not about analytic solutions, though 
those are always nice, but about testable hypotheses in whatever form stated. 
Nowadays, these often come in the form of computational models.

You then talk about the "Global-Local Disconnect" as if that were some gulf 
unbridgeable in principle the instant we find a system is complex. But that 
contradicts the fact that science works -- we can understand a world of 
bouncing molecules and sticky atoms in terms of pressure and flammability. 
Science has produced a large number of levels of explanation, many of them 
causally related, and will continue doing so. But there is not (and never 
will be) any overall closed-form analytic solution.

The physical world is, in your and Wolfram's words, "computationally 
irreducible". But computational irreducibility is a johnny-come-lately 
retread of a very key idea, Gödel incompleteness, that forms the basis of 
much of 20th-century mathematics, including computer science. It is PROVABLE 
that any system that is computationally universal cannot be predicted, in 
general, except by simulating its computation. This was known well before 
Wolfram came along. He didn't say diddley-squat that was new in ANKoS.

So, any system that is computationally universal, i.e. Turing-complete, i.e. 
capable of modelling a Universal Turing machine, or a Post production system, 
or the lambda calculus, or partial recursive functions, is PROVABLY immune to 
analytic solution. And yet, guess what? we have computer *science*, which has 
found many regularities and predictabilities, much as physics has found 
things like Lagrange points that are stable solutions to special cases of the 
3-body problem.

One common poster child of "complex systems" has been the fractal beauty of 
the Mandelbrot set, seemingly generating endless complexity from a simple 
formula. Well duh -- it's a recursive function.
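
The "well duh" in code (a bare-bones Python escape-time check, no
plotting): the entire set comes from iterating one recursive step,
z -> z*z + c.

    def in_mandelbrot(c, max_iter=100):
        z = 0j
        for _ in range(max_iter):
            z = z * z + c          # the whole "endless complexity" generator
            if abs(z) > 2.0:
                return False       # escaped: c is outside the set
        return True

    print(in_mandelbrot(-1 + 0j))      # True  (inside)
    print(in_mandelbrot(0.5 + 0.5j))   # False (outside)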

I find it very odd that you spend more than a page on Conway's Life, talking 
about ways to characterize it beyond the "generative capacity" -- and yet you 
never mention that Life is Turing-complete. It certainly isn't obvious; it 
was an open question for a couple of decades, I think; but it has been shown 
able to model a Universal Turing machine. Once that was proven, there were 
suddenly a LOT of things known about Life that weren't known before. 

Your next point, which you call Hidden Complexity, is very much like the 
phenomenon I call Formalist Float (B.AI 89-101). Which means, oddly enough, 
that we've come to very much the same conclusion about what the problem with 
AI to date has been -- except that I don't buy your GLD at all, except 
inasmuch as it says that science is hard.

Okay, so much for science. On to engineering, or how to build an AGI. You 
point out that connectionism, for example, has tended to study mathematically 
tractable systems, leading them to miss a key capability. But that's exactly 
to be expected if they build systems that are not computationally universal, 
incapable of self-reference and recursion -- and that has been said long and 
loud in the AI community since Minsky published Perceptrons, even before the 
connectionist resurgence in the 80's.

You propose to take core chunks of complex system and test them empirically, 
finding scientific characterizations of their behavior that could be used in 
a larger system. Great! This is just what Hugo de Garis has been say

Re: [agi] Religion-free technical content

2007-10-02 Thread Matt Mahoney

--- Mark Waser <[EMAIL PROTECTED]> wrote:

> Matt,
> 
> You're missing the point.  Your questions are regarding interpretations 
> as to whether or not certain conditions are equivalent to my statements, not
> challenges to my statements.

So do you claim that there are universal moral truths that can be applied
unambiguously in every situation?  If so, could you please state one?  If you
claim the rules you stated previously are not ambiguous, then could you answer
my original questions?

Or do you mean that there are universal moral truths, but it is not possible
to state any of them unambiguously?


> 
> - Original Message - 
> From: "Matt Mahoney" <[EMAIL PROTECTED]>
> To: 
> Sent: Tuesday, October 02, 2007 7:12 PM
> Subject: **SPAM** Re: [agi] Religion-free technical content
> 
> 
> > --- Mark Waser <[EMAIL PROTECTED]> wrote:
> >
> >> > Do you really think you can show an example of a true moral universal?
> >>
> >> Thou shalt not destroy the universe.
> >> Thou shalt not kill every living and/or sentient being including 
> >> yourself.
> >> Thou shalt not kill every living and/or sentient except yourself.
> >
> > Suppose that the human race was destructively uploaded.  Is this the same 
> > as
> > killing the human race?  What if the technology was not perfect and only X
> > percent of your memories could be preserved?  What value of X is the 
> > threshold
> > for killing vs. saving the human race?
> >
> > Would setting the birth rate to zero be the same as killing every living
> > thing?
> >
> > Would turning the Earth into computronium be the same as destroying it? 
> > What
> > about global warming?  What physical changes count as "destroying" it?
> >
> > Who makes these decisions, you or the more intelligent AGI?  Suppose a 
> > godlike
> > AGI decides that the universe is a simulation.  Therefore there is no need
> 
> > to
> > preserve your memory by uploading because the simulation can always be run
> > again to recover your memories.  Do you go along?
> >
> >
> >
> > -- Matt Mahoney, [EMAIL PROTECTED]


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] Religion-free technical content

2007-10-02 Thread Gary Miller
Josh asked,

>> Who could seriously think that ALL AGIs will then be built to be
friendly?

Children are not born friendly or unfriendly.  

It is as they learn from their parents that they develop their
socialization, their morals, their empathy, and even love.

I am sure that our future fathers here of fledgling AGIs are all basically
decent human beings and will not abuse their creations.  By closely
monitoring their creations' development and not letting them grow up too
fast, they will detect any signs of behavioral problems and counsel their
creations, tracing any disturbing goals or thoughts through the many
monitoring programs developed during the initial learning period, before
the AGIs become conscious.

-Original Message-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 02, 2007 12:36 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content

Beyond AI pp 253-256, 339. I've written a few thousand words on the subject,
myself.

a) the most likely sources of AI are corporate or military labs, and not
just US ones. No friendly AI here, but profit-making and
"mission-performing" AI.

b) the only people in the field who even claim to be interested in building
friendly AI (SIAI) aren't even actually building anything. 

c) of all the people at the AGI conf last year who were trying to build AGI,
none of them had any idea how to make it friendly or even any coherent idea
what friendliness might really mean. Yet they're all building away.

d) same can be said of the somewhat wider group of builders on this mailing
list.

In other words, nobody knows what "friendly" really means, nobody's really
trying to build a friendly AI, and more people are seriously trying to build
an AGI every time I look. 

Who could seriously think that ALL AGIs will then be built to be friendly?

Josh




RE: [agi] Religion-free technical content

2007-10-02 Thread Gary Miller
A good AGI would rise above the ethical dilemma and solve the problem by 
inventing safe alternatives that were both more enjoyable and allowed the 
individual to contribute to his future, his family and society while they were 
experiencing that enjoyment.  And hopefully not doing so in a way that made 
them Borg-like and creeping out the rest of humanity.

There is no reason that so many humans should have to go to work hating their 
jobs, their lives and making the lives of those around them unpleasurable as 
well.

Our neurochemicals should be regulatable to the point where we do not have to 
become vegetables and flirt with addiction and possibly death to enjoy life 
intensely.

-Original Message-
From: Jef Allbright [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, October 02, 2007 12:55 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content

 On 10/2/07, Mark Waser <[EMAIL PROTECTED]> wrote:

> A quick question for Richard and others -- Should adults be allowed to 
> drink, do drugs, wirehead themselves to death?

A correct response is "That depends."

Any "should" question involves consideration of the pragmatics of the system, 
while semantics may not be in question.  [That's a brief portion of the 
response I owe Richard from yesterday.]

Effective deciding of these "should" questions has two major elements:
 (1) understanding of the "evaluation-function" of the assessors with respect 
to these specified "ends", and (2) understanding of principles (of nature) 
supporting increasingly coherent expression of that evolving "evaluation 
function".

And there is always an entropic arrow, due to the change in information as 
decisions now incur consequences not now but in an uncertain future. [This is 
another piece of the response I owe Richard.]

[I'm often told I make everything too complex, but to me this is a coherent, 
sense-making model, excepting the semantic roughness of its expression in this 
post.]

- Jef




Motivational systems [was Re: [agi] Religion-free technical content]

2007-10-02 Thread Linas Vepstas


On Sun, Sep 30, 2007 at 08:17:46PM -0400, Richard Loosemore wrote:
> 
> 1) Stability
> 
> The system becomes extremely stable because it has components that 
> ensure the validity of actions and thoughts.  Thus, if the system has 
> "acquisition of money" as one of its main sources of pleasure, and if it 
> comes across a situation in which it would be highly profitable to sell 
> its mother's house and farm to a property developer and selling its 
> mother into the whote slave trade, it may try to justify that this is 
> consistent with its feelings of family attachment because [insert some 
> twisted justification here].  But this is difficult to do because the 
> system cannot stop other parts of its mind from taking this excuse apart 
> and examining it, and passing judgement on whether this is really 
> consistent ... this is what cognitive dissonance is all about.  And the 
> more intelligent the system, the more effective these other processes 
> are.  If it is smart enough, it cannot fool itself with excuses.

You are presuming two things here:
A) that "thinking harder about things" is wired into the pleasure
   center. I believe it should be, but a common human failure is to 
   fail to think through the consequences of an action. 

   The AGI pleasure center might be wired to "think deeply about
   the consequences of its actions, unless there is an urgent
   (emergency) need for action, in which case, take the expedient
   solution." Thus, it might be easier/more expedient to concoct 
   a twisted justification for selling ones mother than it would 
   be to "do the right thing", especially if the emergency is that
   the property developer made an offer "good only today, so act 
   fast", and so AGI acted without thinking sufficiently about it.

   This seems like a trap that is not "obviously" avoidable.
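
To make the trap concrete, the wiring described in (A) amounts to something
like the toy policy below (illustrative Python only; the threshold, the option
scores, and the function names are all made up, not anyone's actual design).
Whoever can pump the urgency signal controls which branch runs:

# Toy illustration: "think deeply about consequences unless there is an
# urgent need for action, in which case take the expedient solution."
URGENCY_THRESHOLD = 0.8   # hypothetical tuning constant

def deliberate(options):
    """Slow path: score each option by its simulated long-run consequences."""
    return max(options, key=lambda o: o["long_term_value"])

def expedient(options):
    """Fast path: grab whatever looks best right now."""
    return max(options, key=lambda o: o["immediate_payoff"])

def choose(options, urgency):
    return expedient(options) if urgency > URGENCY_THRESHOLD else deliberate(options)

offers = [
    {"name": "keep the farm", "immediate_payoff": 0, "long_term_value": 10},
    {"name": "sell to the developer", "immediate_payoff": 5, "long_term_value": -10},
]
print(choose(offers, urgency=0.2)["name"])    # calm: keeps the farm
print(choose(offers, urgency=0.95)["name"])   # "offer good only today!": sells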

The other thing you are presuming is:
B) the AGI will reason rationally from well-founded assumptions.

   Another common human failing is to create logically sound 
   arguments that are based on incorrect premises.  Some folks
   think long and hard about something, and then reach insane
   conclusions (and act on those insane conclusions!) because the
   underlying assumptions were bad. 

   Sure, if you are smart, you should also double-check your underlying
   assumptions. But it is easy to be blind to the presence of faulty
   assumptions. I don't see just how AGI can avoid this trap either.

> 2) Immunity to shortcircuits
> 
> Because the adult system is able to think at such a high level about the 
> things it feels obliged to do, it can know perfectly well what the 
> consequence of various actions would be, including actions that involve 
> messing around with its own motivational system.

Huh?  This contradicts your earlier statements about complex systems.
The problem with things like "Conway's Game of Life", or e.g. a 
chaotic dynamical system is that it is hard/intractable to predict
the outcome that results from a minor change in the initial conditions.
The only way to find out what will happen is to make the change,
and then run the simulation.

That is, one CANNOT "know perfectly well what the consequence of
various actions would be" -- one might be able to make guesses,
but one cannot actually know for sure.
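
A one-line toy system already makes the point: two starting states that agree
to ten decimal places end up nowhere near each other after a few dozen steps,
and there is no shortcut to iterating the map (the logistic map here is only a
stand-in for "a chaotic dynamical system", not a model of any AGI):

# Sensitive dependence on initial conditions in the logistic map x -> 4x(1-x).
def iterate(x: float, steps: int) -> float:
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

a, b = 0.2, 0.2 + 1e-10                  # differ only in the 10th decimal place
print(iterate(a, 60), iterate(b, 60))    # after 60 steps the trajectories have diverged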

In particular, the only way that an AGI can find out what happens if it
changes its motivational system is to run a simulation of itself,
with the new motivational system. That is, fork itself, and run 
a copy. Suppose its new copy turns out to be a psycho-killer?  Gee,
well, then, better shut the thing down!  Yow... ethical issues 
pop up everywhere on that path.

The only way one can "know for sure" is to create a mathematical
theorem that states that certain behaviour patterns are in the 
"basin of attraction" for the modified system. But I find it hard
to believe that such a proof would be possible.
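
For reference, the statement such a theorem would have to make, in standard
dynamical-systems terms (a textbook definition, not a claim about any
particular AGI): for a state-update map f, with x_{t+1} = f(x_t), and a set A
of acceptable behaviour patterns, the basin of attraction of A is

    B(A) = { x_0 : lim_{t -> infinity} dist(x_t, A) = 0 }.

A safety proof for a self-modification f -> f' would have to show that every
state reachable after the modification lies in B_{f'}(A); for an arbitrary
Turing-complete f', non-trivial behavioural properties of this kind are
undecidable in general (Rice's theorem), which is one way of cashing out the
doubt expressed above.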

> So it *could* reach inside and redesign itself.  But even thinking that 
> thought would give rise to the realisation of the consequences, and 
> these would stop it.

I see a problem with the above. It's built on the assumption that 
an effective "pleasure center" wiring for one level of intelligence
will also be effective when one is 10x smarter.  It's not obvious to
me that, as AGI gets smarter, it might not "go crazy". The cause
for its "going crazy" may in fact be the very wiring of that pleasure
center ... and so now, that wiring needs to be adjusted.  

> In fact, if it knew all about its own design (and it would, eventually), 
> it would check to see just how possible it might be for it to 
> accidentally convince itself to disobey its prime directive, and if 
> necessary it would take actions to strengthen the check-and-balance 
> mechanisms that stop it from producing "justifications".  Thus it would 
> be stable even as its intelligence radically increased:  it might 
> redesign itself, but knowing the stability of its current design, and 
> the dangers of a

Re: [agi] Religion-free technical content

2007-10-02 Thread Richard Loosemore

J Storrs Hall, PhD wrote:

On Tuesday 02 October 2007 01:20:54 pm, Richard Loosemore wrote:
Main assumption built into this statement: that it is possible to build 
an AI capable of doing anything except dribble into its wheaties, using 
the techniques currently being used.

I have explained elsewhere why this is not going to work.


I find your argument quotidian and lacking in depth. Virtually any of the 
salient properties of complex systems are true of any Turing-equivalent 
computational system -- non-linearity, sensitive dependence on initial 
conditions, provable unpredictability, etc. It's why complex systems can be 
simulated on computers. Computer scientists have been dealing with these 
issues for half a century and we have a good handle on what can and can't be 
done.


You know, Josh, there are times when only the bald truth will do.

What you said above was pure, unalloyed bullshit:  an exquisite cocktail 
of complete technical ignorance, patronizing insults and breathtaking 
arrogance.


You did not understand word one of what was written in the paper.

It shows.


Richard Loosemore



Re: [agi] Religion-free technical content

2007-10-02 Thread Mark Waser

Matt,

   You're missing the point.  Your questions are regarding interpretations 
as to whether or not certain conditions are equivalent to my statements, not 
challenges to my statements.


- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 02, 2007 7:12 PM
Subject: **SPAM** Re: [agi] Religion-free technical content



--- Mark Waser <[EMAIL PROTECTED]> wrote:


> Do you really think you can show an example of a true moral universal?

Thou shalt not destroy the universe.
Thou shalt not kill every living and/or sentient being including 
yourself.

Thou shalt not kill every living and/or sentient except yourself.


Suppose that the human race was destructively uploaded.  Is this the same 
as

killing the human race?  What if the technology was not perfect and only X
percent of your memories could be preserved?  What value of X is the 
threshold

for killing vs. saving the human race?

Would setting the birth rate to zero be the same as killing every living
thing?

Would turning the Earth into computronium be the same as destroying it? 
What

about global warming?  What physical changes count as "destroying" it?

Who makes these decisions, you or the more intelligent AGI?  Suppose a 
godlike
AGI decides that the universe is a simulation.  Therefore there is no need 
to

preserve your memory by uploading because the simulation can always be run
again to recover your memories.  Do you go along?



-- Matt Mahoney, [EMAIL PROTECTED]







Re: [agi] Religion-free technical content

2007-10-02 Thread Matt Mahoney
--- Mark Waser <[EMAIL PROTECTED]> wrote:

> > Do you really think you can show an example of a true moral universal?
> 
> Thou shalt not destroy the universe.
> Thou shalt not kill every living and/or sentient being including yourself.
> Thou shalt not kill every living and/or sentient except yourself.

Suppose that the human race was destructively uploaded.  Is this the same as
killing the human race?  What if the technology was not perfect and only X
percent of your memories could be preserved?  What value of X is the threshold
for killing vs. saving the human race?

Would setting the birth rate to zero be the same as killing every living
thing?

Would turning the Earth into computronium be the same as destroying it?  What
about global warming?  What physical changes count as "destroying" it?

Who makes these decisions, you or the more intelligent AGI?  Suppose a godlike
AGI decides that the universe is a simulation.  Therefore there is no need to
preserve your memory by uploading because the simulation can always be run
again to recover your memories.  Do you go along?



-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] Religion-free technical content

2007-10-02 Thread Edward W. Porter
The below is a good post:

I have one major question for Josh.  You said

“PRESENT-DAY TECHNIQUES CAN DO MOST OF THE THINGS THAT AN AI NEEDS TO DO,
WITH
THE EXCEPTION OF COMING UP WITH NEW REPRESENTATIONS AND TECHNIQUES. THAT'S

THE SELF-REFERENTIAL KERNEL, THE TAIL-BITING, GÖDEL-INVOKING COMPLEX CORE
OF
THE WHOLE PROBLEM.”

Could you please elaborate on exactly what the “complex core of the whole
problem” is that you still think is currently missing.

Why for example would a Novamente-type system’s representations and
techniques not be capable of being self-referential in the manner you seem
to be implying is both needed and currently missing?

From my reading of Novamente it would have a tremendous amount of
activation and representation of its own states, emotions, and actions.
In fact virtually every representation in the system would have weightings
reflecting its value to the system.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 02, 2007 4:39 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On Tuesday 02 October 2007 01:20:54 pm, Richard Loosemore wrote:
> J Storrs Hall, PhD wrote:
> > a) the most likely sources of AI are corporate or military labs, and
> > not
just
> > US ones. No friendly AI here, but profit-making and
> > "mission-performing"
AI.
>
> Main assumption built into this statement: that it is possible to
> build
> an AI capable of doing anything except dribble into its wheaties, using
> the techniques currently being used.

Lots of smart people work for corporations and governments; why assume
they
won't advance the state of the art?

Furthermore, it's not clear that One Great Blinding Insight is necessary.
Intelligence evolved, after all, making it reasonable to assume that it
can
be duplicated by a series of small steps in the right direction.

> I have explained elsewhere why this is not going to work.

I find your argument quotidian and lacking in depth. Virtually any of the
salient properties of complex systems are true of any Turing-equivalent
computational system -- non-linearity, sensitive dependence on initial
conditions, provable unpredictability, etc. It's why complex systems can
be
simulated on computers. Computer scientists have been dealing with these
issues for half a century and we have a good handle on what can and can't
be
done.

> You can disagree with my conclusions if you like, but you did not
> cover
> this in Beyond AI.

The first half of the book, roughly, is about where and why classic AI
stalled
and what it needs to get going. Note that some dynamical systems theory is

included.

> > b) the only people in the field who even claim to be interested in
building
> > friendly AI (SIAI) aren't even actually building anything.
>
> That, Josh, is about to change.

Glad to hear it. However, you are now on the horns of a dilemma. If you
tell
enough of your discoveries/architecture to convince me (and the other more

skeptical people here) that you are really on the right track, all those
governments and corporations will take them (as Derek noted) and throw
much
greater resources at them than we can.

> So what you are saying is that I "[have no] idea how to make it
> friendly
> or even any coherent idea what friendliness might really mean."
>
> Was that your most detailed response to the proposal?

I think it's self-contradictory. You claim to have found a stable,
un-short-circuitable motivational architecture on the one hand, and you
claim
that you'll be able to build a working system soon because you have a way
of
bootstrapping on all the results of cog psych, on the other. But the prime

motivational (AND learning) system of the human brain is the
dopamine/reward-predictor error signal system, and it IS
short-circuitable.

> You yourself succinctly stated the final piece of the puzzle
> yesterday.
> When the first AGI is built, its first actions will be to make sure that

> nobody is trying to build a dangerous, unfriendly AGI.  After that
> point, the first friendliness of the first one will determine the
> subsequent motivations of the entire population, because they will
> monitor each other.

I find the hard take-off scenario very unlikely, for reasons I went into
at
some length in the book. (I know Eliezer likes to draw an analogy to
cellular
life getting started in a primeval soup, but I think the more apt parallel
to
draw is with the Cambrian Explosion.)

> The question is only whether the first one will be friendly:  any talk
> about "all AGIs" that pretends that there will be some other scenario is

> meaningless.

A very loose and hyperbolic u

Re: [agi] Religion-free technical content

2007-10-02 Thread J Storrs Hall, PhD
On Tuesday 02 October 2007 01:20:54 pm, Richard Loosemore wrote:
> J Storrs Hall, PhD wrote:
> > a) the most likely sources of AI are corporate or military labs, and not 
just 
> > US ones. No friendly AI here, but profit-making and "mission-performing" 
AI.
> 
> Main assumption built into this statement: that it is possible to build 
> an AI capable of doing anything except dribble into its wheaties, using 
> the techniques currently being used.

Lots of smart people work for corporations and governments; why assume they 
won't advance the state of the art?

Furthermore, it's not clear that One Great Blinding Insight is necessary. 
Intelligence evolved, after all, making it reasonable to assume that it can 
be duplicated by a series of small steps in the right direction.

> I have explained elsewhere why this is not going to work.

I find your argument quotidian and lacking in depth. Virtually any of the 
salient properties of complex systems are true of any Turing-equivalent 
computational system -- non-linearity, sensitive dependence on initial 
conditions, provable unpredictability, etc. It's why complex systems can be 
simulated on computers. Computer scientists have been dealing with these 
issues for half a century and we have a good handle on what can and can't be 
done.
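
For the record, the "provable unpredictability" being invoked is the standard
diagonalization behind the halting problem; the sketch below shows only the
shape of that argument (would_halt is a hypothetical oracle that provably
cannot be implemented, not a real API):

# Shape of the classic argument: assume a general behaviour-predictor exists,
# then build a program that consults it about itself and does the opposite.
def would_halt(program_source: str, arg: str) -> bool:
    """HYPOTHETICAL oracle: True iff the program halts on arg. Cannot exist."""
    raise NotImplementedError("no total, correct predictor is possible")

CONTRARY = '''
def contrary(arg):
    if would_halt(CONTRARY, arg):   # predicted to halt?
        while True:                 # ...then loop forever
            pass
    return None                     # predicted to loop? ...then halt
'''

# Whatever would_halt says about contrary(CONTRARY), it is wrong.  So for a
# Turing-complete system the only fully general "prediction" is to run it.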

> You can disagree with my conclusions if you like, but you did not cover 
> this in Beyond AI.

The first half of the book, roughly, is about where and why classic AI stalled 
and what it needs to get going. Note that some dynamical systems theory is 
included. 

> > b) the only people in the field who even claim to be interested in 
building 
> > friendly AI (SIAI) aren't even actually building anything. 
> 
> That, Josh, is about to change.

Glad to hear it. However, you are now on the horns of a dilemma. If you tell 
enough of your discoveries/architecture to convince me (and the other more 
skeptical people here) that you are really on the right track, all those 
governments and corporations will take them (as Derek noted) and throw much 
greater resources at them than we can.

> So what you are saying is that I "[have no] idea how to make it friendly 
> or even any coherent idea what friendliness might really mean."
> 
> Was that your most detailed response to the proposal?

I think it's self-contradictory. You claim to have found a stable, 
un-short-circuitable motivational architecture on the one hand, and you claim 
that you'll be able to build a working system soon because you have a way of 
bootstrapping on all the results of cog psych, on the other. But the prime 
motivational (AND learning) system of the human brain is the 
dopamine/reward-predictor error signal system, and it IS short-circuitable. 
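
For readers who haven't met it, the reward-prediction-error story is the
textbook temporal-difference picture sketched below (toy chain, made-up
constants; this is generic TD(0), not a claim about Richard's architecture or
about real neurons beyond the standard modelling assumption). "Short-circuiting"
then just means arranging for the error term delta to come out large and
positive no matter what the system does:

# Minimal TD(0) sketch of a reward-prediction-error signal on a 5-state chain.
GAMMA, ALPHA = 0.9, 0.1              # discount factor, learning rate
value = {s: 0.0 for s in range(5)}   # V(s); state 4 is terminal

def step(state):
    """Move right along the chain; only the final transition is rewarded."""
    next_state = min(state + 1, 4)
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward

for episode in range(200):
    s = 0
    while s != 4:
        s2, r = step(s)
        delta = r + GAMMA * value[s2] - value[s]   # the prediction-error signal
        value[s] += ALPHA * delta                  # learning is driven by delta
        s = s2

print(value)   # expectations propagate backward: V(3) -> 1.0, V(2) -> 0.9, ...
# Wireheading / short-circuiting corresponds to clamping r (and hence delta)
# high directly, after which "learning" tracks the clamp, not the world.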
 
> You yourself succinctly stated the final piece of the puzzle yesterday. 
> When the first AGI is built, its first actions will be to make sure that 
> nobody is trying to build a dangerous, unfriendly AGI.  After that 
> point, the first friendliness of the first one will determine the 
> subsequent motivations of the entire population, because they will 
> monitor each other.

I find the hard take-off scenario very unlikely, for reasons I went into at 
some length in the book. (I know Eliezer likes to draw an analogy to cellular 
life getting started in a primeval soup, but I think the more apt parallel to 
draw is with the Cambrian Explosion.)

> The question is only whether the first one will be friendly:  any talk 
> about "all AGIs" that pretends that there will be some other scenario is 
> meaningless.

A very loose and hyperbolic use of the word. There will be a wide variety of 
AIs, near AIs assisting humans and organizations, brain implants and 
augmentations, brain simulations getting ever closer to uploads, and so 
forth.
 
> Ease of construction of present day AI techniques:  zero, because they 
> will continue to fail in the same stupid way they have been failing for 
> the last fifty years.

Present-day techniques can do most of the things that an AI needs to do, with 
the exception of coming up with new representations and techniques. That's 
the self-referential kernel, the tail-biting, Gödel-invoking complex core of 
the whole problem. 

It will not be solved by simply shifting to a different set of techniques. 
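
As a small aside on what "self-referential" means at the level of code, the
classic toy is a program that outputs its own source; the two lines below do
exactly that (comment aside). It illustrates bare self-reference only, not the
"complex core" itself:

# A minimal quine: the next two lines print an exact copy of themselves.
s = 's = %r\nprint(s %% s)'
print(s % s)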

> Method of construction of the only viable alternative to conventional 
> AI:  an implicitly secure type of AGI motivation (the one I have been 
> describing) in which the easiest and quickest-to-market type of design 
> is one which is friendly, rather than screwed up by contradictory 
> motivations.

As I mentioned, in humans the motivation and learning systems coincide or 
strongly overlap. I would give dollars to doughnuts that if you constrain the 
motivational system too much, you'll just build a robo-fundamentalist, immune 
to learning.

Josh


Re: [agi] Religion-free technical content

2007-10-02 Thread Mark Waser

> Do you really think you can show an example of a true moral universal?

Thou shalt not destroy the universe.
Thou shalt not kill every living and/or sentient being including 
yourself.

Thou shalt not kill every living and/or sentient except yourself.


Mark, this is so PHIL101.  Do you *really* think there aren't people,
now or conceivable in another time, who could in full sincerity
violate any of these?   Think cult-mentality for starters.


   There are people who could in full sincerity violate *any* statement. 
All you have to be is stupid enough, deluded enough, dishonest enough, or 
any one of a thousand pathologies.  The (until now unrealized by me) PHIL 
101 is that you are arguing unrealistic infinities that serve no purpose 
other than mental masturbation.


   Do you *really* think that any of those three statements shouldn't be 
embraced regardless of circumstances (and don't give me ridiculous crap like 
"Well, if the universe was only inflicting suffering on everyone . . . . ")



- Original Message - 
From: "Jef Allbright" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 02, 2007 3:14 PM
Subject: **SPAM** Re: [agi] Religion-free technical content



On 10/2/07, Mark Waser <[EMAIL PROTECTED]> wrote:

> Do you really think you can show an example of a true moral universal?

Thou shalt not destroy the universe.
Thou shalt not kill every living and/or sentient being including 
yourself.

Thou shalt not kill every living and/or sentient except yourself.


Mark, this is so PHIL101.  Do you *really* think there aren't people,
now or conceivable in another time, who could in full sincerity
violate any of these?   Think cult-mentality for starters.

I'm not going to cheerfully right you off now, but feel free to have
the last word as I can't afford any more time for this right now.

- Jef







Re: [agi] Religion-free technical content

2007-10-02 Thread Jef Allbright
On 10/2/07, Jef Allbright <[EMAIL PROTECTED]> wrote:
> I'm not going to cheerfully right you off now, but feel free to have the last 
> word.

Of course I meant "cheerfully write you off or ignore you."
- Jef



Re: [agi] Religion-free technical content

2007-10-02 Thread Jef Allbright
On 10/2/07, Mark Waser <[EMAIL PROTECTED]> wrote:
> > Do you really think you can show an example of a true moral universal?
>
> Thou shalt not destroy the universe.
> Thou shalt not kill every living and/or sentient being including yourself.
> Thou shalt not kill every living and/or sentient except yourself.

Mark, this is so PHIL101.  Do you *really* think there aren't people,
now or conceivable in another time, who could in full sincerity
violate any of these?   Think cult-mentality for starters.

I'm not going to cheerfully right you off now, but feel free to have
the last word as I can't afford any more time for this right now.

- Jef



Re: [agi] Religion-free technical content

2007-10-02 Thread Mark Waser

Do you really think you can show an example of a true moral universal?


Thou shalt not destroy the universe.
Thou shalt not kill every living and/or sentient being including yourself.
Thou shalt not kill every living and/or sentient except yourself.

- Original Message - 
From: "Jef Allbright" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 02, 2007 2:53 PM
Subject: **SPAM** Re: [agi] Religion-free technical content



On 10/2/07, Mark Waser <[EMAIL PROTECTED]> wrote:

Wrong.  There *are* some absolute answers.  There are some obvious 
universal

"Thou shalt not"s that are necessary unless you're rabidly anti-community
(which is not conducive to anyone's survival -- and if you want to argue
that community survival isn't absolute, then I'll just cheerfully ignore
you).


I see "community", or rather cooperation, as essential to morality.
(Quite ironic that you would write someone off for disagreement over
the value of community!)

As for moral absolutes, they fail due to the non-existence of an absolute 
context.


As for moral universals, humans can get pretty close to universal
agreement on some key principles, but this is true only to the extent
that we share a common values-assessment function based on shared
cultural, genetic, physical... heritage.  So not quite universal now,
and bound to broaden.  Do you really think you can show an example of
a true moral universal?

- Jef







Re: [agi] Religion-free technical content

2007-10-02 Thread Jef Allbright
On 10/2/07, Mark Waser <[EMAIL PROTECTED]> wrote:

> Wrong.  There *are* some absolute answers.  There are some obvious universal
> "Thou shalt not"s that are necessary unless you're rabidly anti-community
> (which is not conducive to anyone's survival -- and if you want to argue
> that community survival isn't absolute, then I'll just cheerfully ignore
> you).

I see "community", or rather cooperation, as essential to morality.
(Quite ironic that you would write someone off for disagreement over
the value of community!)

As for moral absolutes, they fail due to the non-existence of an absolute context.

As for moral universals, humans can get pretty close to universal
agreement on some key principles, but this is true only to the extent
that we share a common values-assessment function based on shared
cultural, genetic, physical... heritage.  So not quite universal now,
and bound to broaden.  Do you really think you can show an example of
a true moral universal?

- Jef



Re: [agi] Religion-free technical content

2007-10-02 Thread Vladimir Nesov
On 10/2/07, Jef Allbright <[EMAIL PROTECTED]> wrote:
> On 10/2/07, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> > On 10/2/07, Jef Allbright <[EMAIL PROTECTED]> wrote:
> > > Argh!  "Goal system" and "Friendliness" are roughly the same sort of
> > > confusion.  They are each modelable only within a ***specified***,
> > > encompassing context.
> > >
> > > In more coherent, modelable terms, we express our evolving nature,
> > > rather than strive for "goals."
> > >
> >
> > Terminology. Note that I did talk about subproblems of 'goal system':
> > 'goal content' (textual description, such as Eliezer's CV) and
> > property of system itself to behave according to this 'goal content'.
> > Word 'goal' is a functional description, it doesn't limit design
> > choices.
> > What do you mean by context here? Certainly goal content needs
> > semantic grounding in system's knowledge.
>
> Fundamental systems theory. Any system can be effectively specified
> only within a more encompassing context.  Shades of Godel's theorem
> considering the epistemological implications.  So it's perfectly valid
> to speak of goals within an effectively specified context, but it's
> incoherent to speak of a supergoal of friendliness as if that
> expression has a modelable referent.
>
> Goals, like free-will, are a property of the observer, not the observed.
>
> When I speak of context, I'm generally not talking semantics but
> pragmatics; not meaning, but "what works"; not linguistics, but
> systems.

In this case there is no distinction: semantics is a label for
perception machinery; that machinery implements integration of
behavior, experience and 'goal content'. The 'goalness' of goals lies in
their existing as a separate subsystem prior to integration
(learning).

-- 
Vladimir Nesovmailto:[EMAIL PROTECTED]



Re: [agi] Religion-free technical content

2007-10-02 Thread Mark Waser

You already are and do, to the extent that you are and do.  Is my
writing really that obscure?


It looks like you're veering towards CEV . . . . which I think is a *huge* 
error.  CEV says nothing about chocolate or strawberry and little about 
great food or mediocre sex.



The pragmatic point is that there are no absolute answers, but you
absolutely can improve the process (from any particular subjective
point of view.)


Wrong.  There *are* some absolute answers.  There are some obvious universal 
"Thou shalt not"s that are necessary unless you're rabidly anti-community 
(which is not conducive to anyone's survival -- and if you want to argue 
that community survival isn't absolute, then I'll just cheerfully ignore 
you).


- Original Message - 
From: "Jef Allbright" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 02, 2007 1:34 PM
Subject: **SPAM** Re: [agi] Religion-free technical content



On 10/2/07, Mark Waser <[EMAIL PROTECTED]> wrote:

> Effective deciding of these "should" questions has two major elements:
> (1) understanding of the "evaluation-function" of the assessors with
> respect to these specified "ends", and (2) understanding of principles
> (of nature) supporting increasingly coherent expression of that
> evolving "evaluation function".

So how do I get to be an assessor and decide?


You already are and do, to the extent that you are and do.  Is my
writing really that obscure?

The pragmatic point is that there are no absolute answers, but you
absolutely can improve the process (from any particular subjective
point of view.)

- Jef







Re: [agi] Religion-free technical content

2007-10-02 Thread Jef Allbright
On 10/2/07, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On 10/2/07, Jef Allbright <[EMAIL PROTECTED]> wrote:
> > Argh!  "Goal system" and "Friendliness" are roughly the same sort of
> > confusion.  They are each modelable only within a ***specified***,
> > encompassing context.
> >
> > In more coherent, modelable terms, we express our evolving nature,
> > rather than strive for "goals."
> >
>
> Terminology. Note that I did talk about subproblems of 'goal system':
> 'goal content' (textual description, such as Eliezer's CV) and
> property of system itself to behave according to this 'goal content'.
> Word 'goal' is a functional description, it doesn't limit design
> choices.
> What do you mean by context here? Certainly goal content needs
> semantic grounding in system's knowledge.

Fundamental systems theory. Any system can be effectively specified
only within a more encompassing context.  Shades of Godel's theorem
considering the epistemological implications.  So it's perfectly valid
to speak of goals within an effectively specified context, but it's
incoherent to speak of a supergoal of friendliness as if that
expression has a modelable referent.

Goals, like free-will, are a property of the observer, not the observed.

When I speak of context, I'm generally not talking semantics but
pragmatics; not meaning, but "what works"; not linguistics, but
systems.

[I want to apologize to the list. I'm occasionally motivated to jump
in where I imagine I see some fertile ground to plant a seed of
thought, but due to pressures of work I'm unable to stay and provide
the appropriate watering and tending needed for its growth.]

- Jef



RE: [agi] Religion-free technical content

2007-10-02 Thread Derek Zahn
Richard Loosemore:
> > a) the most likely sources of AI are corporate or military labs, and not just
> > US ones. No friendly AI here, but profit-making and "mission-performing" AI.
>
> Main assumption built into this statement: that it is possible to build
> an AI capable of doing anything except dribble into its wheaties, using
> the techniques currently being used.
>
> I have explained elsewhere why this is not going to work.

If your explanations are convincing, smart people in industry and the military
might just absorb them and then they still have more money and manpower than
you do.

> When the first AGI is built, its first actions will be to make sure that
> nobody is trying to build a dangerous, unfriendly AGI.

I often see it assumed that the step between "first AGI is built" (which I
interpret as a functioning model showing some degree of generally-intelligent
behavior) and "god-like powers dominating the planet" is a short one.  Is that
really likely?
 
 


Re: [agi] Religion-free technical content

2007-10-02 Thread Vladimir Nesov
On 10/2/07, Jef Allbright <[EMAIL PROTECTED]> wrote:
> Argh!  "Goal system" and "Friendliness" are roughly the same sort of
> confusion.  They are each modelable only within a ***specified***,
> encompassing context.
>
> In more coherent, modelable terms, we express our evolving nature,
> rather than strive for "goals."
>

Terminology. Note that I did talk about subproblems of 'goal system':
'goal content' (textual description, such as Eliezer's CV) and
property of system itself to behave according to this 'goal content'.
Word 'goal' is a functional description, it doesn't limit design
choices.
What do you mean by context here? Certainly goal content needs
semantic grounding in system's knowledge.

-- 
Vladimir Nesovmailto:[EMAIL PROTECTED]



Re: [agi] Religion-free technical content

2007-10-02 Thread Jef Allbright
On 10/2/07, Mark Waser <[EMAIL PROTECTED]> wrote:
> > Effective deciding of these "should" questions has two major elements:
> > (1) understanding of the "evaluation-function" of the assessors with
> > respect to these specified "ends", and (2) understanding of principles
> > (of nature) supporting increasingly coherent expression of that
> > evolving "evaluation function".
>
> So how do I get to be an assessor and decide?

You already are and do, to the extent that you are and do.  Is my
writing really that obscure?

The pragmatic point is that there are no absolute answers, but you
absolutely can improve the process (from any particular subjective
point of view.)

- Jef



Re: [agi] Religion-free technical content

2007-10-02 Thread Mark Waser

Effective deciding of these "should" questions has two major elements:
(1) understanding of the "evaluation-function" of the assessors with
respect to these specified "ends", and (2) understanding of principles
(of nature) supporting increasingly coherent expression of that
evolving "evaluation function".


So how do I get to be an assessor and decide?

- Original Message - 
From: "Jef Allbright" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 02, 2007 12:55 PM
Subject: **SPAM** Re: [agi] Religion-free technical content



On 10/2/07, Mark Waser <[EMAIL PROTECTED]> wrote:


A quick question for Richard and others -- Should adults be allowed to
drink, do drugs, wirehead themselves to death?


A correct response is "That depends."

Any "should" question involves consideration of the pragmatics of the
system, while semantics may not be in question.  [That's a brief
portion of the response I owe Richard from yesterday.]

Effective deciding of these "should" questions has two major elements:
(1) understanding of the "evaluation-function" of the assessors with
respect to these specified "ends", and (2) understanding of principles
(of nature) supporting increasingly coherent expression of that
evolving "evaluation function".

And there is always an entropic arrow, due to the change in
information as decisions now incur consequences not now but in an
uncertain future. [This is another piece of the response I owe
Richard.]

[I'm often told I make everything too complex, but to me this is a
coherent, sense-making model, excepting the semantic roughness of its
expression in this post.]

- Jef






Re: [agi] Religion-free technical content

2007-10-02 Thread Jef Allbright
On 10/2/07, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On 10/2/07, Mark Waser <[EMAIL PROTECTED]> wrote:
> > You misunderstood me -- when I said robustness of the goal system, I meant
> > the contents and integrity of the goal system, not the particular
> > implementation.
>
> I meant that too - and I didn't mean to imply this distinction.
> Implementation of goal system, or 'goal system itself' are both in my
> argument can be represented as text written in natural language, that
> is in rather faulty way. 'Goal system as what it meant to be' is what
> intelligent system tries to achieve.
>
> >
> > I do however continue to object to your phrasing about the system
> > recognizing influence on it's goal system and preserving it.  Fundamentally,
> > there are only a very small number of "Thou shalt not" supergoals that need
> > to be forever invariant.  Other than those, the system should be able to
> > change it's goals as much as it likes (chocolate or strawberry?  excellent
> > food or mediocre sex?  save starving children, save adults dying of disease,
> > or go on vacation since I'm so damn tired from all my other good works?)
>
> This is just substitution of levels of abstraction. Programs on my PC
> run on a fixed hardware and are limited by its capabilities, yet they
> can vary greatly. Plus, intelligent system should be able to integrate
> impact of goal system on multiple levels of abstraction, that is it
> can infer level of strictness in various circumstances (which
> interface with goal system through custom-made abstractions).
>
>
> > A quick question for Richard and others -- Should adults be allowed to
> > drink, do drugs, wirehead themselves to death?
> >
> > - Original Message -
> > From: "Vladimir Nesov" <[EMAIL PROTECTED]>
> > To: 
> > Sent: Tuesday, October 02, 2007 9:49 AM
> > Subject: **SPAM** Re: [agi] Religion-free technical content
> >
> >
> > > But yet robustness of goal system itself is less important than
> > > intelligence that allows system to recognize influence on its goal
> > > system and preserve it. Intelligence also allows more robust
> > > interpretation of goal system. Which is why the way particular goal
> > > system is implemented is not very important. Problems lie in rough
> > > formulation of what goal system should be (document in English is
> > > probably going to be enough) and in placing the system under
> > > sufficient influence of its goal system (so that intelligent processes
> > > independent on it would not take over).
> > >
> > > On 10/2/07, Mark Waser <[EMAIL PROTECTED]> wrote:
> > >> The intelligence and goal system should be robust enough that a single or
> > >> small number of sources should not be able to alter the AGI's goals;
> > >> however, it will not do this by recognizing "forged communications" but
> > >> by
> > >> realizing that the aberrant goals are not in congruence with the world.
> > >> Note that many stupid and/or greedy people will try to influence the
> > >> system
> > >> and it will need to be immune to them (or the solution will be worse than
> > >> the problem).

Argh!  "Goal system" and "Friendliness" are roughly the same sort of
confusion.  They are each modelable only within a ***specified***,
encompassing context.

In more coherent, modelable terms, we express our evolving nature,
rather than strive for "goals."

- Jef



Re: [agi] Religion-free technical content

2007-10-02 Thread Jef Allbright
 On 10/2/07, Mark Waser <[EMAIL PROTECTED]> wrote:

> A quick question for Richard and others -- Should adults be allowed to
> drink, do drugs, wirehead themselves to death?

A correct response is "That depends."

Any "should" question involves consideration of the pragmatics of the
system, while semantics may not be in question.  [That's a brief
portion of the response I owe Richard from yesterday.]

Effective deciding of these "should" questions has two major elements:
 (1) understanding of the "evaluation-function" of the assessors with
respect to these specified "ends", and (2) understanding of principles
(of nature) supporting increasingly coherent expression of that
evolving "evaluation function".

And there is always an entropic arrow, due to the change in
information as decisions now incur consequences not now but in an
uncertain future. [This is another piece of the response I owe
Richard.]

[I'm often told I make everything too complex, but to me this is a
coherent, sense-making model, excepting the semantic roughness of its
expression in this post.]

- Jef


