Re: [singularity] Quantum Mechanics and Consciousness

2008-05-21 Thread Richard Loosemore


I'm sorry, but this is not getting back to established scientific 
theory - I have met Evan Harris Walker, have read his book (at least in 
the draft form that he gave me in 1982) and as a physicist I know enough 
quantum mechanics to know that his theory is complete bunkum.


There is no need for quantum tunneling across synapses, because there 
are well established molecular transport mechanisms that do this.  Who 
cares about one electron in a billion getting across when several 
billion electrons, and other junk, gets across by quite ordinary means?


Also, his theory is like many others in that it explains consciousness 
before it even says what consciousness is.  A bit like having a 
complete, quantum mechanical theory of the Easter Bunny before being 
able to specify exactly what the bunny is supposed to be.


I believe there are viable theories of what consciousness is, and what 
its explanation is (you will have to wait for my book to come out before 
you see why I would be so confident), so if you are anxious that a 
future AI should have consciousness, I believe this can easily be arranged.



Richard Loosemore




Bertromavich Edenburg wrote:

For Virtual AI or General
I think if we need to make the AIs more like us and able to function as 
human beings we need to give them basic equations that make them think.

Like These:

QUANTUM MECHANICS AND CONSCIOUSNESS

Getting back to established scientific theory, normal waking
consciousness occurs when the nerve cell firing rate (synaptic
switching rate) is high enough to spread out the waves associated
with electrons to fill the gaps between nerve cells (synaptic
clefts) with waves of probability of similar amplitude. This is
described mathematically by the quantum mechanical mechanism of
tunneling. These waves are interconnected throughout regions of
the brain through resonances, resulting in a large, complex,
unified, quantum mechanically defined resonance matrix filling a
region in the brain. The waves are interconnected with each
other and with information storage and sensory input mechanisms
within these regions of the brain.


The nerve cell firing rate (v') at which this occurs has been
modeled mathematically by Evan Harris Walker (at the U.S. Army
Ballistics Center at Aberdeen Proving Ground) and corresponds to
the threshold between waking and sleeping consciousness in people
and animals. For normal waking consciousness to exist, the
synapse transmission frequency for the brain (v') must satisfy
the condition:

v' >= N^(2/3) / T


where:

N = The total number of synapses in the brain (in humans,
about 5E11)

T = Synaptic transmission delay time (the time interval
required for the propagation of the excitation energy
from one synapse to another)
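
As a rough numerical illustration only of the inequality as stated (not an
endorsement of it; see the criticism earlier in the thread), the threshold
can be computed directly.  T is not given in this excerpt, so the value
below is an assumed placeholder:

    # Sketch in Python: evaluate the claimed waking-consciousness threshold
    #   v' >= N**(2/3) / T
    # N is taken from the text; T is an assumed placeholder (~1 ms).
    N = 5e11          # total number of synapses in the human brain (from text)
    T = 1e-3          # assumed synaptic transmission delay in seconds
    threshold = N ** (2.0 / 3.0) / T
    print(f"v' must be at least {threshold:.2e} synaptic events per second")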


This theory ascribes consciousness to an association of the
events occurring at any one synapse with events occurring at
other synapses in the brain by means of a quantum mechanical
propagation of information. The sense of individual identity is
an aspect of the continuity of the wave matrix residing in the
brain [4].


QUANTUM MECHANICS AND PSYCHOKINESIS

By merely observing a phenomenon (resonating one's brain with it)
one can affect the outcome, since the physical mechanisms in your
brain are part of the wave matrix described by quantum mechanics.
The information handling rate in resonance determines the amount
of effect, along with the elapsed time of resonance and the
probability distribution of the phenomenon you are observing
[5]. According to Evan Harris Walker, quantum mechanical state
selection can be biased by an observer if [5]:


W_Q * t_e >= -log2 P(Qo-Qi)

where:


P(Qo-Qi) = Probability that state Qi will occur by chance
alone

W_Q = Information handling rate in process in brain
      associated with state vector selection (bits/sec)

t_e = Elapsed time

Q = Overall state vector

Qo = Initial physical state of system

Qi = State that manifests 'paranormal' target event
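
Again purely as an illustration of the stated inequality, it can be
rearranged to give a minimum elapsed time; both numbers below are assumed
placeholders, not values from the text:

    # Sketch in Python: Walker's claimed bias condition
    #   W_Q * t_e >= -log2 P(Qo-Qi)
    import math

    W_Q = 1.0e3        # assumed information handling rate (bits/sec)
    P = 1.0e-6         # assumed chance probability of the target state Qi
    required_bits = -math.log2(P)     # information needed to select the state
    t_e_min = required_bits / W_Q     # minimum elapsed time implied above
    print(f"{required_bits:.1f} bits required; t_e >= {t_e_min:.2e} seconds")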


The effect of consciousness is incredibly small on macroscopic
systems; but it can be measurable when it occurs on quantum
mechanically defined and divergent systems, where a slight change
can amplify itself as it propagates through the system. The
effect is about 1E-17 degrees on the angle of the bounce of cubes
going down an inclined plane. Changes in the angle of bounce
result in changes in displacement of the cubes that increase
about 50% on every bounce, and the effect is measurable after
many bounces [6]. The theory successfully and quantitatively
modeled the differing amounts of displacement observed in
experiments on cubes of different weights and weight
distributions [5].
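
A quick arithmetic check of the amplification claim (the figures 1E-17
degrees and ~50% growth per bounce are taken from the text; the 1-degree
stopping point below is an arbitrary assumption standing in for
"measurable"):

    # Sketch in Python: how many bounces does a 1E-17 degree perturbation,
    # growing ~50% per bounce, need before it reaches about 1 degree?
    delta = 1e-17        # initial angular perturbation in degrees (from text)
    growth = 1.5         # ~50% increase per bounce (from text)
    bounces = 0
    while delta < 1.0:   # 1 degree chosen arbitrarily as "measurable"
        delta *= growth
        bounces += 1
    print(f"about {bounces} bounces to reach {delta:.2f} degrees")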

Walker also modeled information retrieval in 'guess the card'
experiments. Simple, classical, random chance would predict a
smooth, binomial curve for the probabilities of getting the right
answer versus the number of subjects making successful
predictions at these probabilities. Walker's

Re: [singularity] New list announcement: fai-logistics

2008-04-26 Thread Richard Loosemore


Thomas,

The argument I presented is *not* a restatement of Rice's theorem, 
because the concept of the size of a scientific theory is not something 
that maps onto the parallel concept of the size of a function, 
algorithm, or program.


In order to map theory-size onto algorithm-size it would be necessary to 
PRESUPPOSE the answer to the question that is driving these 
considerations about scientific theories.





Richard Loosemore












Thomas McCabe wrote:

On Thu, Apr 24, 2008 at 3:16 AM, Samantha Atkins [EMAIL PROTECTED] wrote:

Thomas McCabe wrote:


Does NASA have a coherent response to the moon hoax theory?


 This is completely uncalled for.  No particular theory of AGI at this time
deserves to be compared to the moon hoax conspiracy theory, or
alternatively, they all do.  :-)


Obviously, Richard's theories are not as nonsensical as the moon hoax
nutcases. It was simply the first example that sprang to mind.


 Of course
not; it isn't worth their time. This was used against NASA by the moon
hoaxers for years, until independent astronomers started posting
rebuttals. You must show that your theory is credible, or at least
reasonably popular, before people will take the time to refute it.




 Popularity is irrelevant.


Popularity, of course, is not the ultimate judge of accuracy. But are
you seriously claiming that how many people support a theory is
totally uncorrelated with the accuracy of said theory? Even after the
theory has been debated for years?


 While I am not an AGI researcher I occasionally
notice where the weak spots in various theories are and speak up
accordingly.   There is no way I consider Richard Loosemore to be some kind
of crackpot.  His theories appear as valid as any I have read from
Eliezer.


If you look at Richard Loosemore's blog (http://susaro.com/), you will
essentially find an extremely-long-winded restatement of Rice's
Theorem (http://en.wikipedia.org/wiki/Rice's_theorem), and a
non-canonical redefinition of the word complexity. This is not a
great deal of intellectual content, considering the volume of
Richard's posts. Compare one of his five-page posts to, say, this
statement of a general workaround by Eliezer:

Rice's Theorem states that it is not possible to distinguish whether an
arbitrary computer program implements any function, including, say, simple
multiplication. And yet, despite Rice's Theorem, modern chip engineers
create computer chips that implement multiplication. Chip engineers select,
from all the vast space of possibilities, only those chip designs which they
can understand.
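
To make the contrast in that quotation concrete, here is a hedged sketch
(the multiplier below is a made-up toy, not anyone's actual chip design):
Rice's Theorem blocks deciding what an arbitrary program computes, but a
deliberately restricted design over a bounded input space can simply be
checked exhaustively.

    # Sketch in Python: a toy shift-and-add multiplier, verified exhaustively
    # over all 8-bit inputs.  This only illustrates "select designs you can
    # understand"; it is not a claim about real chip verification practice.
    def shift_add_multiply(a: int, b: int) -> int:
        result = 0
        while b:
            if b & 1:            # add the shifted multiplicand when bit is set
                result += a
            a <<= 1
            b >>= 1
        return result

    assert all(shift_add_multiply(a, b) == a * b
               for a in range(256) for b in range(256))
    print("multiplier verified over all 8-bit input pairs")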


   Unless I missed a major development Eliezer's FAI theory is not
at a point where its validity can be reasonably confidently judged.
 Actually a true pet theory by your definition might well be the one
breakthrough wild idea that turns out to work.  I think it is much too early
to be dismissive of anything beyond obvious nonsense.

 If fai-logistics is not in fact working off Eliezer's ideas then exactly
what is the group using as its starting basis?


 - samantha












Re: [singularity] New list announcement: fai-logistics

2008-04-18 Thread Richard Loosemore

Rolf Nelson wrote:

what is this distinction you are making between logistics and direct
solutions?  Especially given that there is much debate about how to
implement friendliness.


By logistics, I mean trying to get talented and motivated people
working on the problem in ways that match their skills.


 And why do you make that reference to the possibility of someone bringing
their pet theory to the list?


The logistics list is not the place to debate FAI theory. I mention
pet theories specifically because if nobody besides you accepts your
theory, the logistics of implementing that theory are not going to be
of interest to anyone.


What do you mean by a pet theory?


If you have to ask...


Well, since you put it that way, I will explain why I ask.

The only people that I know of who are doing what they call FAI Theory 
are people associated with Eliezer Yudkowsky's ideas.


That thing that he calls FAI Theory is not actually a theory (there is 
no systematic plan to ensure friendliness, nor even a theoretical basis 
on which such a plan could be devised), it is only an intention to try a 
particular approach to the FAI problem.


The particular approach behind Yudkowsky's FAI Theory was questioned 
(as you know, by me), but that challenge was met by an astonishing 
outburst of irrational ranting and posturing, by Yudkowsky and his 
associates, and that outburst has permanently damaged their credibility. 
 After that outburst, Yudkowsky made an attempt to silence the challenge 
to his ideas by banning all discussion of the topic on his SL4 list.


These people now refer to this challenge using language such as calling 
it a "pet theory" of one individual, and by making claims like "nobody 
accepts that theory except that one individual."  This is not the 
behavior of mature scientists or engineers interested in solving 
problems:  you don't refer to an opposing point of view by denigrating 
the individual responsible for it.


Given that there has been a challenge to the very specific ideas that 
Yudkowsky calls FAI Theory, and given the childish response to that 
challenge, it is quite laughable that someone could set up a discussion 
list to handle the logistics of working on it, whilst specifically 
excluding any discussion of whether or not the thing called FAI theory 
has any content at all.


Such a discussion list would be just another exclusive club for people 
dedicated to spineless Yudkowsky-worship.





Richard Loosemore





Re: [singularity] New list announcement: fai-logistics

2008-04-18 Thread Richard Loosemore

Thomas McCabe wrote:

On 4/18/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 Such a discussion list would be just another exclusive club for people
dedicated to spineless Yudkowsky-worship.

 Richard Loosemore



Eli's not a member of fai-logistics, and I don't think he even knows
about it yet. As for your challenge, you have to convince other people
that your theory is of some merit before people will consider it
seriously.



People *have* taken both the challenge and my theory (two different 
things, notice) seriously.  They just do not belong to the SIAI group, 
and they do not make their views known in these public fora.  They just 
communicate with me offlist.


Your suggestion that I have to convince people of its merits before they 
take it seriously is rather naive (if you will forgive me for saying 
so).  The particular kind of people who are prepared to defend their 
views with any amount of incoherent rationalization, and who will use 
any amount of personal slander to attack people they dislike, are not 
going to be convinced of anything if it does not suit them, nor will 
they ever take it seriously.


I am quite content, at this stage, to point to the sum total of all the 
responses to my suggestion, and let others judge the situation as it stands.


I will elaborate on the ideas when I have written down the framework 
that allows people to understand the ideas easily.  All in good time.





Richard Loosemore



Re: [singularity] New list announcement: fai-logistics

2008-04-18 Thread Richard Loosemore

Thomas McCabe wrote:

On 4/18/08, Richard Loosemore [EMAIL PROTECTED] wrote:

Thomas McCabe wrote:


On 4/18/08, Richard Loosemore [EMAIL PROTECTED] wrote:


 Such a discussion list would be just another exclusive club for people
dedicated to spineless Yudkowsky-worship.

 Richard Loosemore



Eli's not a member of fai-logistics, and I don't think he even knows
about it yet. As for your challenge, you have to convince other people
that your theory is of some merit before people will consider it
seriously.



 People *have* taken both the challenge and my theory (two different things,
notice) seriously.  They just do not belong to the SIAI group, and they do
not make their views known in these public fora.  They just communicate with
me offlist.


Who are these people? Investors? Colleagues? Interested amateurs?


 Your suggestion that I have to convince people of its merits before they
take it seriously is rather naive (if you will forgive me for saying so).


If *someone* isn't convinced of its merits, what separates your idea
from Gene Ray's Time Cube?


The particular kind of people who are prepared to defend their views with
any amount of incoherent rationalization, and who will use any amount of
personal slander to attack people they dislike, are not going to be
convinced of anything if it does not suit them, nor will they ever take it
seriously.


Not everyone has to take your idea seriously (look at evolution!), but
you must be able to convince *someone* of the merits of your work.
Even crazy cult leaders can attract large followings.


You repeatedly insinuate, in your comments above, that the idea is not 
taken seriously by anyone, in spite of the fact I have already made it 
quite clear that this is false.


Can you explain why you do this?  Why, when you get the clear answer 
'yes', do you continue to make remarks that amount to 'If the answer is 
no, then ...'?


Feel free to talk about the ideas themselves, if you wish:  your 
personal opinion about the relative popularity of the ideas is a waste 
of time, given your obvious bias.




Richard Loosemore



Re: [singularity] New list announcement: fai-logistics

2008-04-18 Thread Richard Loosemore

Thomas McCabe wrote:

On 4/18/08, Richard Loosemore [EMAIL PROTECTED] wrote:

 You repeatedly insinuate, in your comments above, that the idea is not
taken seriously by anyone, in spite of the fact I have already made it quite
clear that this is false.



The burden of proof is on you to show that someone takes your ideas
seriously. You have yet to link to a paper commenting on your work, or
a paper citing your work, or a blog which makes use of your ideas,
etc., etc.


Remember, the 'idea' at issue right now is the *challenge* that I issued 
to Eliezer's approach to FAI.


If someone issues a challenge to a set of ideas, the appropriate 
response is not Does anyone agree with the idea of this challenge?, 
but Does the challenged party have a coherent response to this challenge?.


You keep trying to start a popularity contest, to see how many 
people like the idea of the challenge.  I cannot think of anything more 
silly:  just address the challenge itself and look at how the 
challenged party reacted.


I am perfectly happy to let the ideas stand for themselves, and to point 
to the contrast between (a) the clear articulation of those ideas that I 
made, and (b) the incoherent (and sometimes rabidly irrational) reaction 
to those ideas.  That contrast speaks volumes.


It is always a bad sign when a person like yourself is incapable of 
debating the actual issues themselves, but has to resort to childish 
strategies like demanding to see authority figures who like the ideas.





Richard Loosemore








[singularity] An Open Letter to AGI Investors

2008-04-16 Thread Richard Loosemore


I have stuck my neck out and written an Open Letter to AGI (Artificial 
General Intelligence) Investors on my website at http://susaro.com.


All part of a campaign to get this field jumpstarted.

Next week I am going to put up a road map for my own development project.




Richard Loosemore






Re: [singularity] Vista/AGI

2008-04-13 Thread Richard Loosemore

Mike Tintner wrote:
Samantha: From what you said above $50M will do the entire job.  If that 
is all that is standing between us and AGI then surely we can get on with 
it in all haste.

Oh for gawdsake, this is such a tedious discussion. I would suggest the 
following is a reasonable *framework* for any discussions - although it 
is also a framework to end discussions for the moment.


1)  Given our general ignorance, everyone is, strictly, entitled to 
their opinions about the future of AGI. Ben is entitled to his view that 
it will only take $50M or thereabouts.


BUT

2) Not a SINGLE problem of AGI has been solved yet. Not a damn one. Is 
anyone arguing different? And until you've solved one, you can hardly 
make *reasonable* predictions about how long it will take to solve the 
rest - predictions that anyone, including yourself should take 
seriously - especially if you've got any sense, any awareness of AI's 
long, ridiculous and incorrigible record of crazy predictions here (and 
that's by Minsky's and Simon's as well as lesser lights) - by people also 
making predictions without having solved any of AGI's problems. All 
investors beware. Massive health and wealth warnings.


MEANWHILE

3)Others - and I'm not the only one here - take a view more like: the 
human brain/body is the most awesomely complex machine in the known 
universe, the product of billions of years of evolution.  To emulate it, 
or parallel its powers, is going to take not just trillions but zillions 
of dollars - many times global output, many, many Microsofts. Right now 
that's a reasonable POV too.


But until you've solved one, just a measly one of AGI's problems, 
there's not a lot of point in further discussion, is there? Nobody's 
really gaining from it, are they? It's just masturbation, isn't it?


Mike,

Your comments are irresponsible.  Many problems of AGI have been solved. 
 If you disagree with that, specify exactly what you mean by a problem 
of AGI, and let us list them.  I have discovered the complex systems 
problem:  this is a major breakthrough.  You cannot understand it, or 
why it is a major breakthrough, but that makes no odds.


Everything you say in this post is based on your own ignorance of what 
AGI actually is.  What you are really saying is "Nobody has been able to 
make me understand what AGI has achieved, so AGI is useless."


Sorry, but your posts are sounding more and more like incoherent rants.



Richard Loosemore



Re: [singularity] Vista/AGI

2008-04-13 Thread Richard Loosemore

Mike Tintner wrote:



Mike,

Your comments are irresponsible.  Many problems of AGI have been 
solved. If you disagree with that, specify exactly what you mean by a 
problem of AGI, and let us list them.


1.General Problem Solving and Learning (independently learning/solving 
problems in a new domain)


2.Conceptualisation [Invariant Representation] -  forming concept of 
Madonna which can embrace rich variety of different faces/photos of her


3.Visual Object Recognition

4.Aural Object Recognition [dunno proper term here - being able to 
recognize same melody played in any form]


5.Analogy

6.Metaphor

7.Creativity

8.Narrative Visualisation - being able to imagine and create a visual 
scenario ( a movie)   [just made this problem up - but it's a good one]


In your ignorance, you named a set of targets, not a set of problems. 
 If you want to see these fully functioning, you will see them in the 
last year of a 10-year AGI project, but if we listened to you, the 
first nine years of that project would be condemned as a complete waste 
of time.


If, on the other hand, you want to see an *in* *principle* solution (an 
outline of how these can all be implemented), then these in principle 
solutions are all in existence.  It is just that you do not know them, 
and when we go to the trouble of pointing them out to you (or explaining 
them to you), you do not understand them for what they are.




[By all means let's identify some more unsolved problems BTW..]

I think Ben and I more or less agreed that if he had really solved 1) - if 
his pet could really independently learn to play hide-and-seek after 
having been taught to fetch, it would constitute a major breakthrough, 
worthy of announcement to the world. And you can be sure it would be 
provoking a great deal of discussion.


As for your discoveries, fine, have all the self-confidence you want, 
but they have had neither public recognition nor, as I understand, 
publication 


Okay, stop right there.

This is a perfect example of the nonsense you utter on this list:  you 
know that I have published a paper on the complex systems problem 
because you told me recently that you have read the paper.


But even though you have read this published paper, all you can do when 
faced with the real achievement that it contains is to say that (a) you 
don't understand it, and (b) this published paper that you have already 
read  has not been published!


Are there no depths to which you will not stoop?



Richard Loosemore





Re: [singularity] Vista/AGI

2008-04-13 Thread Richard Loosemore

Derek Zahn wrote:

Ben Goertzel:

  Yes -- it is true, we have not created a human-level AGI yet. No serious
  researcher disagrees. So why is it worth repeating the point?

Long ago I put Tintner in my killfile -- he's the only one there, and 
it's regrettable but it was either that or start taking blood pressure 
medicine... so *plonk*.  It's not necessarily that I disagree with most 
of his (usually rather obvious) points or think his own ideas (about 
image schemas or whatever) are worse than other stuff floating around, 
but his toxic personality makes the benefit not worth the cost.  Now I 
only have to suffer the collateral damage in responses.


Yes, he was in my killfile as well for a long time, then I decided to 
give him a second chance.  Now I am regretting it, so back he goes ... 
*plonk*.


Mike:  the only reason I am now ignoring you is that you persistently 
refuse to educate yourself about the topics discussed on this list, and 
instead you just spout your amateur opinions as if they were fact.  Your 
inability to distinguish real science from your amateur opinion is why, 
finally, I have had enough.


I apologize to the list for engaging him.  I should have just ignored 
his ravings.




However, I went to the archives to fetch this message.   I do think it 
would be nice to have tests or problems that one could point to as 
partial progress... but it's really hard.  Any such things have to be 
fairly rigorously specified (otherwise we'll argue all day about whether 
they are solved or not -- see Tintner's Creativity problem as an 
obvious example), and they need to not be AGI complete themselves, 
which is really hard.  For example, Tintner's Narrative Visualization 
task strikes me as needing all the machinery and a very large knowledge 
base so by the time a system could do a decent job of this in a general 
context it would already have demonstrably solved the whole thing.


It looks like you, Ben and I have now all said exactly the same thing, 
so we have a strong consensus on this.



The other common criticism of tests is that they can often be solved 
by Narrow-AI means (say, current face recognizers which are often better 
at this task than humans).  I don't necessarily think this is a 
disqualification though... if the solution is provided in the context of 
a particular architecture with a plausible argument for how the system 
could have produced the specifics itself, that seems like some sort of 
progress.
 
I sometimes wonder if a decent measurement of AGI progress might be to 
measure the ease with which the system can be adapted by its builders to 
solve narrow AI problems -- sort of a cognitive enhancement 
measurement.  Such an approach makes a decent programming language and 
development environment be a tangible early step toward AGI but maybe 
that's not all bad.
 
At any rate, if there were some clearly-specified tests that are not 
AGI-complete and yet not easily attackable with straightforward software 
engineering or Narrow AI techniques, that would be a huge boost in my 
opinion to this field.  I can't think of any though, and they might not 
exist.  If it is in fact impossible to find such tasks, what does that 
say about AGI as an endeavor?


My own feeling about this is that when a set of ideas start to gel into 
one coherent approach to the subject, with a description of those ideas 
being assembled as a book-length manuscript, and when you read those 
ideas and they *feel* like progress, you will know that substantial 
progress is happening.


Until then, the only people who might get an advanced feeling that such 
a work is on the way are the people on the front lines, who see all the 
pieces coming together just before they are assembled for public 
consumption.


Whether or not someone could write down tests of progress ahead of that 
point, I do not know.





Richard Loosemore





[singularity] A more accessible summary of the CSP

2008-04-13 Thread Richard Loosemore


Since I am making an effort to get a good chunk of stuff written this 
week and next, I want to let y'all know when I put out new stuff...


I have written a short, accessible summary of the CSP argument on my 
blog, as a preparation for the next phase tomorrow.


Hopefully this one will not be as demanding as the last (a few hundred 
words instead of 4,200).





Richard Loosemore



[singularity] Blog essay on the complex systems problem

2008-04-11 Thread Richard Loosemore


I have just finished producing a blog post that describes the complex 
systems problem in what I hope will be a more accessible form than the 
paper that I wrote before.


I am gradually working up to newer topics, but I have to lay the 
groundwork by giving a definitive version of some of the ideas I have 
written about elsewhere.




Richard Loosemore






[singularity] I'm just not sure how well this plan was thought through [WAS Re: Promoting an A.S.P.C,A.G.I.]

2008-04-10 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

Just what do you want out of AGI?  Something that thinks like a person or
something that does what you ask it to?

Either will do:  your suggestion achieves neither.

If I ask your non-AGI the following question:  How can I build an AGI 
that can think at a speed that is 1000 times faster than the speed of 
human thought? it will say:


Hi, my name is Ben and I just picked up your question.  I would
 love to give you the answer but you have to send $20 million
 and give me a few years.

That is not the answer I would expect of an AGI.  A real AGI would do 
original research to solve the problem, and solve it *itself*.


Isn't this, like, just too obvious for words?  ;-)


Your question is not well formed.  Computers can already think 1000 times
faster than humans for things like arithmetic


You just trivialized the definition of think by using it to describe a 
pocket calculator as a thinking system.  The whole point of the term 
AGI is that it refers to a general intelligence, not a pocket calculator.



Does your AGI also need to
know how to feed your dog?


No, because I don't have a dog.


Or should it guess and build it anyway?


Excuse me?


I would
think such a system would be dangerous.


A dog-feeding AGI?  Chilling, I agree.



I expect a competitive message passing network to improve over time.


I expect the design of teapots will improve over time, but I 
will never be tempted to conclude that they will therefore become as 
intelligent as an AGI.



Early
versions will work like an interactive search engine.  You may get web pages
or an answer from another human in real time, and you may later receive
responses to your persistent query.  If your question can be matched to an
expert in some domain that happens to be on the net, then it gets routed
there.  Google already does this.  For example, if you type an address, it
gives you a map and offers driving directions.  If you ask it how many
teaspoons in a cubic parsec? it will compute the answer (try it).  It won't
answer every question, but with 1000 times more computing power than Google, I
expect there will be many more domain experts.


You just fell for the Low-Hanging Fruit Fallacy.

When a computer processes a request like how many teaspoons in a cubic 
parsec? it can extract the meaning of the question by a relatively 
simple set of syntactic rules and question templates.
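
For instance, here is a minimal sketch of the kind of template-plus-
arithmetic handling being described; the pattern and unit constants are
illustrative assumptions, not Google's actual mechanism:

    # Sketch in Python: one hard-coded question template plus unit arithmetic.
    import re

    METERS_PER_PARSEC = 3.0857e16          # approximate
    TEASPOON_IN_M3 = 4.92892e-6            # US teaspoon, in cubic metres

    def answer(question: str):
        if re.match(r"how many teaspoons in a cubic parsec\??$",
                    question.strip(), re.I):
            return METERS_PER_PARSEC ** 3 / TEASPOON_IN_M3
        return None                        # anything off-template falls through

    print(f"{answer('How many teaspoons in a cubic parsec?'):.2e} teaspoons")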


But when you ask it a question like how many dildos are there on the 
planet? [Try it] you find that Google cannot answer this superficially 
similar question because it requires more intelligence in the 
question-analysis mechanism.  The first reply is a web page that starts 
off How many men have fake women’s asses at home? and as far as I 
can see there are no hits that answer the question.


Just because it can get the low-hanging fruit (the easy-to-parse 
questions) does not mean that it is straightforward to get a system that 
answers more and more sophisticated questions.  There is no reason 
whatsoever to assume that increases in hardware or tweaking of search 
algorithms will raise the system to the level where it will be able to 
answer the question without asking a human.


And if that question is not too much for the system, we can just up the 
ante to the one I mentioned before:  How do I build a 1000x AGI




I expect as hardware gets more powerful, peers will get better at things like
recognizing people in images, writing programs, and doing original research.


Peers?  People or machines?  You are not worried about the possibility 
that the number of questions (especially questions like hey dude, how 
can I pull a hot babe?) might exceed the number of experts with the time 
to answer them?



I don't claim that I can solve these problems.  I do claim that there is an
incentive to provide these services and that the problems are not intractable
given powerful hardware, and therefore the services will be provided.


I submit that you know nothing of the sort:  the problem of providing 
answers to meaningfully difficult questions may well be intractable 
given your methods, no matter how much hardware you have.



There
are two things to make the problem easier.  First, peers will have access to a
vast knowledge source that does not exist today.  Second, peers can specialize
in a narrow domain, e.g. only recognize one particular person in images, or
write software or do research in some obscure, specialized field.


You have clearly done no calculations whatsoever to establish that the 
demand for answers can be met by the supply of question-answerers.  Have 
you even calculated the number of recognizable images in the world, and 
the number of people available to be specialists for recognizing each 
one?  You could use up every single individual on the planet by putting 
them all on standby to answer questions involving the recognition

Re: [singularity] I'm just not sure how well this plan was thought through [WAS Re: Promoting an A.S.P.C,A.G.I.]

2008-04-10 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:

When a computer processes a request like how many teaspoons in a cubic 
parsec? it can extract the meaning of the question by a relatively 
simple set of syntactic rules and question templates.


But when you ask it a question like how many dildos are there on the 
planet? [Try it] you find that google cannot answer this superficially 
similar question because it requires more intelligence in the 
question-analysis mechanism.


And just how would you expect your AGI to answer the question?  The first step
in research is to find out if someone else has already answered it.  It may
have been answered but Google can't find it because it only indexes a small
fraction of the internet.  It may also be that some dildo makers are privately
held and don't release sales figures.  In any case your AGI is either going to
output a number or I don't know, neither of which is more helpful than
Google.  If it does output a number, you are still going to want to know where
it came from.

But this discussion is tiresome.  I would not have expected you to anticipate
today's internet in 1978.  I suppose when the first search engine (Archie) was
released in 1990, you probably imagined that all search engines would require
you to know the name of the file you were looking for.

If you have a better plan for AGI, please let me know.


I do.  I did already.

You are welcome to ask questions about it at any time (see 
http://susaro.com/publications).


I will release more details about my plan as time goes on, and taking 
into account business pressures to keep some information proprietary.



Richard Loosemore



Re: [singularity] I'm just not sure how well this plan was thought through [WAS Re: Promoting an A.S.P.C,A.G.I.]

2008-04-10 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:

Matt Mahoney wrote:

If you have a better plan for AGI, please let me know.

I do.  I did already.

You are welcome to ask questions about it at any time (see 
http://susaro.com/publications).


Question: which of these papers is actually a proposal for AGI?

I did also look at http://susaro.com/archives/category/general but there is no
design here either, just a list of unfounded assertions.  Perhaps you can
explain why you believe point #6 in particular to be true.


Perhaps you can explain why you described these as unfounded 
assertions when I clearly stated in the post that the arguments to back 
up this list will come later, and that this list was intended just as a 
declaration?


It really is quite frustrating when you make accusations based on the 
fact that you stopped reading after a few paragraphs.




Richard Loosemore



About the Nine Misunderstandings post [WAS Re: [singularity] I'm just not sure how well...]

2008-04-10 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

I did also look at http://susaro.com/archives/category/general but there is no
design here either, just a list of unfounded assertions.  Perhaps you can
explain why you believe point #6 in particular to be true.
Perhaps you can explain why you described these as unfounded 
assertions when I clearly stated in the post that the arguments to back 
up this list will come later, and that this list was intended just as a 
declaration?


You say, The problem with this assumption is that there is not the slightest
reason why there should be more than one type of AI, or any competition
between individual AIs, or any evolution of their design.

Which is completely false.  There are many competing AI proposals right now. 
Why will this change?  I believe your argument is that the first AI to achieve
recursive self improvement will overwhelm all competition.  Why should it be
friendly when the only goal it needs to succeed is acquiring resources?


Because you have failed to look into this in enough depth to realize 
that you cannot build an AGI that will actually work, if its goal is to 
do nothing but acquire resources.


Your claim that [this] is completely false rests on assumptions like 
these.


My point, though, is that people like you make wild assumptions that you 
have not thought through, and then go around making irresponsible 
declarations that AGI *will* be like this or that, when in fact the 
assumptions on which you base these assertions are deeply flawed.


My list of nine misunderstandings was an attempt to redress the balance 
by giving what I believe to be a summary (NOTE:  it was JUST a summary, 
at this stage) of what the more accurate picture is like, when you start 
to make more accurate assumptions.


Now, I am sure that there will be elements of my (later) arguments that 
are challengeable, but at this stage I wanted to draw a line in the 
sand, and also make it clear to newcomers that there is at least one 
body of thought that says that everything being assumed right now is 
completely and utterly misleading.





We
already have examples of reproducing agents: Code Red, SQL Slammer, Storm,
etc. A worm that can write and debug code and discover new vulnerabilities
will be unstoppable.  Do you really think your AI will win the race when you
have the extra burden of making it safe?


Yes, because these reproducing agents you refer to are the most 
laughably small computer viruses that have no hope whatsoever of 
becoming generally intelligent.  At every turn, you completely 
underestimate what it means for a system to be intelligent.




Also, RSI is an experimental process, and therefore evolutionary.  We have
already gone through the information theoretic argument why this must be the
case.


No you have not:  I know of no information theoretic argument that 
even remotely applies to the type of system that is needed to achieve 
real intelligence.  Furthermore, the statement that RSI is an 
experimental process, and therefore evolutionary is just another 
example of you declaring something to be true when, in fact, it is 
loaded down with spurious assumptions.  Your statement is a complete 
non sequitur.





Richard Loosemore
















Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore

Matt Mahoney wrote:

--- Mike Tintner [EMAIL PROTECTED] wrote:


My point was how do you test the *truth* of items of knowledge. Google tests
the *popularity* of items. Not the same thing at all. And it won't work.


It does work because the truth is popular.  Look at prediction markets.  Look
at Wikipedia.  It is well known that groups make better decisions as a whole
than the individuals in those groups (e.g. democracies vs. dictatorships). 
Combining knowledge from independent sources and testing their reliability is
a well known machine learning technique which I use in the PAQ data
compression series.  I understand the majority can sometimes be wrong, but the
truth eventually comes out in a marketplace that rewards truth.

Perhaps you have not read my proposal at http://www.mattmahoney.net/agi.html
or don't understand it.


Some of us have read it, and it has nothing whatsoever to do with 
Artificial Intelligence.  It is a labor-intensive search engine, nothing 
more.


I have no idea why you would call it an AI or an AGI.  It is not 
autonomous, contains no thinking mechanisms, nothing.  Even as a 
labor-intensive search engine there is no guarantee it would work, because 
the conflict resolution issues are all complexity-governed.


I am astonished that you would so blatantly call it something that it is 
not.




Richard Loosemore



Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

Perhaps you have not read my proposal at http://www.mattmahoney.net/agi.html
or don't understand it.
Some of us have read it, and it has nothing whatsoever to do with 
Artificial Intelligence.  It is a labor-intensive search engine, nothing 
more.


I have no idea why you would call it an AI or an AGI.  It is not 
autonomous, contains no thinking mechanisms, nothing.  Even as a 
labor-intensive search engine there is no guarantee it would work, because 
the conflict resolution issues are all complexity-governed.


I am astonished that you would so blatantly call it something that it is 
not.


It is not now.  I think it will be in 30 years.  If I was to describe the
Internet to you in 1978 I think you would scoff too.  We were supposed to have
flying cars and robotic butlers by now.  How could Google make $145 billion by
building an index of something that didn't even exist?

Just what do you want out of AGI?  Something that thinks like a person or
something that does what you ask it to?


Either will do:  your suggestion achieves neither.

If I ask your non-AGI the following question:  How can I build an AGI 
that can think at a speed that is 1000 times faster than the speed of 
human thought? it will say:


   Hi, my name is Ben and I just picked up your question.  I would
love to give you the answer but you have to send $20 million
and give me a few years.

That is not the answer I would expect of an AGI.  A real AGI would do 
original research to solve the problem, and solve it *itself*.


Isn't this, like, just too obvious for words?  ;-)



Richard Loosemore




Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore

Derek Zahn wrote:

I asked:
  Imagine we have an AGI.  What exactly does it do?  What *should* it do?
 
Note that I think I roughly understand Matt's vision for this:  roughly, 
it is google, and it will gradually get better at answering questions 
and taking commands as more capable systems are linked in to the 
network.  When and whether it passes the AGI threshold is rather an 
arbitrary and unimportant issue, it just gets more capable of answering 
questions and taking orders.
 
I find that a very interesting and clear vision.  I'm wondering if there 
are others.


Surely not!

This line of argument looks like a new version of the same story that 
occurred in the very early days of science fiction.  People looked at 
the newly-forming telephone system and they thought that maybe if it 
just got big enough it might become ... intelligent.


Their reasoning was ... well, there wasn't any reasoning behind the 
idea.  It was just a mystical "maybe lots of this will somehow add up to 
more than the sum of the parts", without any justification for why the 
whole should be more than the sum of the parts.


In exactly the same way, there is absolutely no reason to believe that 
Google will somehow reach a threshold and (magically) become 
intelligent.  Why would that happen?


If they deliberately set out to build an AGI somewhere, and then hook 
that up to google, that is a different matter entirely.  But that is not 
what is being suggested here.






Richard Loosemore.



Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Richard Loosemore

Derek Zahn wrote:

Richard Loosemore:

  I am not sure I understand.
 
  There is every reason to think that a currently-envisionable AGI would
  be millions of times smarter than all of humanity put together.
 
  Simply build a human-level AGI, then get it to bootstrap to a level of,
  say, a thousand times human speed (easy enough: we are not asking for
  better thinking processes, just faster implementation), then ask it to
  compact itself enough that we can afford to build and run a few billion
  of these systems in parallel
 
This viewpoint assumes that human intelligence is essentially trivial; I 
see no evidence for this and tend to assume that a properly-programmed 
gameboy is not going to pass the Turing test.  I realize that people on 
this list tend to be more optimistic on this subject so I do accept your 
answer as one viewpoint.  It is surely a minority view, though, and my 
question only makes sense if you assume significant limitations in the 
capability of near-term hardware.


But if you want to make a meaningful statement about limitations, would 
it not be prudent to start from a clear understanding of how the size of 
the task can be measured, and how those measurements relate to the 
available resources?  If there is no information at all, we could not 
make a statement either way.


Without knowing how to bake a cake, or what the contents of your pantry 
are, I don't think you can state that We simply do not have what it 
takes to bake a cake in the near future.


I am only saying that I see no particular limitations, given the things 
that I know about how to build an AGI.  That is the best I can do.




Richard Loosemore



[singularity] Nine Misunderstandings About AI

2008-04-08 Thread Richard Loosemore


I have just written a new blog post that is the beginning of a daily 
series this week and next, when I will be launching a few broadsides 
against the orthodoxy and explaining where I am going with my work.


http://susaro.com/



Richard Loosemore



Re: [singularity] Vista/AGI

2008-04-07 Thread Richard Loosemore

J. Andrew Rogers wrote:


On Apr 6, 2008, at 6:55 PM, Ben Goertzel wrote:

I wonder why some people think there is one true path to AGI ... I
strongly suspect there are many...



Like I stated at the beginning, *most* models are at least theoretically 
valid.  Of course, tractable engineering of said models is another 
issue. :-)  Engineering tractability in the context of computer science 
and software engineering is almost purely an applied mathematics effort 
to the extent there is any theory to it, and science has a very 
limited capacity to inform it.


If someone could describe, specifically, how science is going to 
inform this process given the existing body of theoretical work, I would 
have no problem with the notion.  My objections were pragmatic.


Now hold on just a minute.

Yesterday you directed the following accusation at me:

 [Your assertion] Artificial Intelligence research does
 not have a credible science behind it ... [leads] me to
 believe that you either are ignorant of relevant literature
 (possible) or you do not understand all the relevant
 literature and simply assume it is not important.

You *vilified* the claim that I made, and implied that I could only say 
such a thing out of ignorance, so I challenged you to explain what 
exactly was the science behind artificial intelligence.


But instead of backing up your remarks, you make no response at all to 
the challenge, and then, in the comments to Ben above, you hint that you 
*agree* that there is no science behind AI (... science has a very 
limited capacity to inform it), it is just that you think there should 
not be, or does not need to be, any science behind it.


So let me summarize:

1)  I make a particular claim.

2)  You state that I can only say such a thing if I am ignorant.

3)  You refuse to provide any arguments against the claim.

4)  You then tacitly agree with the original claim.


Oh, and by the way, a small point of logic.  If someone makes a claim 
that There is no science behind artificial intelligence, this is a 
claim about the *nonexistence* of something, so you cannot demand that 
the person produce evidence to support the nonexistence claim.  The onus 
is entirely on you to provide evidence that there is a science behind 
AI, if you believe that there is, not on me to demonstrate that there is 
none.




Richard Loosemore







Re: [singularity] Vista/AGI

2008-04-06 Thread Richard Loosemore

Eric B. Ramsay wrote:
If the Novamente design is able to produce an AGI with only 10-20 
programmers in 3 to 10 years at a cost of under $10 million, then this 
represents such a paltry expense to some companies (Google for example) 
that it would seem to me that the thing to do is share the design with 
them and go for it (Google could R&D this with no impact to their 
shareholders even if it fails). The potential of an AGI is so enormous 
that the cost (risk)/benefit ratio swamps anything Google (or others) 
could possibly be working on. If the concept behind Novamente is truly 
compelling enough it should be no problem to make a successful pitch.


Eric B. Ramsay


[WARNING!  Controversial comments.]


When you say If the concept behind Novamente is truly compelling 
enough, this is the point at which your suggestion hits a brick wall.


What could be compelling about a project? (Novamente or any other). 
Artificial Intelligence is not a field that rests on a firm theoretical 
basis, because there is no science that says this design should produce 
an intelligent machine because intelligence is KNOWN to be x and y and 
z, and this design unambiguously will produce something that satisfies x 
and y and z.


Every single AGI design in existence is a Suck It And See design.  We 
will know if the design is correct if it is built and it works.  Before 
that, the best that any outside investor can do is use their gut 
instinct to decide whether they think that it will work.


Now, my own argument to investors is that the only situation in which we 
can do better than say My gut instinct says that my design will work 
is when we do actually base our work on a foundation that gives 
objective reasons for believing in it.  And the only situation that I 
know of that allows that kind of objective measure is by taking the 
design of a known intelligent system (the human cognitive system) and 
staying as close to it as possible.  That is precisely what I am trying 
to do, and I know of no other project that is trying to do that 
(including the neural emulation projects like Blue Brain, which are not 
pitched at the cognitive level and therefore have many handicaps).


I have other, much more compelling reasons for staying close to human 
cognition (namely the complex systems problem and the problem of 
guaranteeing friendliness), but this objective-validation factor is one 
of the most important.


My pleas that more people do what I am doing fall on deaf ears, 
unfortunately, because the AI community is heavily biassed against the 
messy empiricism of psychology.  Interesting situation:  the personal 
psychology of AI researchers may be what is keeping the field in Dead 
Stop mode.





Richard Loosemore





Re: [singularity] Vista/AGI

2008-04-06 Thread Richard Loosemore

J. Andrew Rogers wrote:


On Apr 6, 2008, at 8:55 AM, Richard Loosemore wrote:
What could be compelling about a project? (Novamente or any other). 
Artificial Intelligence is not a field that rests on a firm 
theoretical basis, because there is no science that says this design 
should produce an intelligent machine because intelligence is KNOWN to 
be x and y and z, and this design unambiguously will produce something 
that satisfies x and y and z.


Every single AGI design in existence is a Suck It And See design.  We 
will know if the design is correct if it is built and it works.  
Before that, the best that any outside investor can do is use their 
gut instinct to decide whether they think that it will work.



Even if every single AGI design in existence is fundamentally broken 
(and I would argue that a fair amount of AGI design is theoretically 
correct and merely unavoidably intractable), this is a false 
characterization.  And at a minimum, it should be "no mathematics" 
rather than "no science".


Mathematical proof of validity of a new technology is largely 
superfluous with respect to whether or not a venture gets funded.  
Investors are not mathematicians, at least not in the sense that 
mathematical certainty of the correctness of the model would be 
compelling.  If they trust the person enough to invest in them, they 
will generally trust that the esoteric mathematics behind the venture 
are correct as well.  No one actually tries to understand the 
mathematics, even if they give it a cursory glance -- that 
is your job.



Having had to sell breakthroughs in theoretical computer science before 
(unrelated to AGI), I would make the observation that investors in 
speculative technology do not really put much weight on what you know 
about the technology.  After all, who are they going to ask if you are 
the presumptive leading authority in that field? They will verify that 
the current limitations you claim to be addressing exist and will want 
concise qualitative answers as to how these are being addressed that 
comport with their model of reality, but no one is going to dig through 
the mathematics and derive the result for themselves.  Or at least, I am 
not familiar with cases that worked differently than this.  The real 
problem is that most AGI designers cannot answer these basic questions 
in a satisfactory manner, which may or may not reflect what they know.


You are addressing (interesting and valid) issues that lie well above 
the level at which I was making my argument, so unfortunately they miss 
the point.


I was arguing that whenever a project claims to be doing engineering 
there is always a background reference that is some kind of science or 
mathematics or prescription that justifies what the project is trying to 
achieve:


1)  Want to build a system to manage the baggage handling in a large 
airport?  Background prescription = a set of requirements that the flow 
of baggage should satisfy.


2)  Want to build an aircraft wing?  Background science = first, the 
physics of air flow, along with specific criteria that must be satisfied.


3)  Want to send people on an optimal trip around a set of cities? 
Background mathematics = a precise statement of the travelling salesman 
problem.
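[Purely as an illustration of what a precise background statement buys you in case 3: the sketch below is not from the original post, and the distance matrix is invented, but the point is that the objective is fully specified, so any candidate tour can be scored unambiguously.]

from itertools import permutations

def tour_length(tour, dist):
    # total length of a closed tour that visits every city exactly once
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + (tour[0],)))

def brute_force_tsp(dist):
    # the optimal tour is simply the permutation minimising a well-defined objective
    cities = range(len(dist))
    return min(permutations(cities), key=lambda t: tour_length(t, dist))

# toy symmetric distance matrix (invented numbers, for illustration only)
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(brute_force_tsp(dist))    # e.g. a tour such as (0, 1, 3, 2)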


No matter how many other cases you care to list, there is always some 
credible science or mathematics or common sense prescription lying at 
the back of the engineering project.


Here, for contrast, is an example of an engineering project behind which 
there was NO credible science or mathematics or prescription:


4*)  Find an alchemical process that will lead to the philosophers' stone.

Alchemists knew what they wanted - kind of - but there was no credible 
science behind what they did.  They were just hacking.


Artificial Intelligence research does not have a credible science behind 
it.  There is no clear definition of what intelligence is, there is only 
the living example of the human mind that tells us that some things are 
intelligent.


This is not about mathematical proof, it is about having a credible, 
accepted framework that allows us to say that we have already come to an 
agreement that intelligence is X, and so, starting from that position, we 
are able to do some engineering to build a system that satisfies the 
criteria inherent in X, so we can build an intelligence.


Instead what we have are AI researchers who have gut instincts about 
what intelligence is, and from that gut instinct they proceed to hack.


They are, in short, alchemists.

And in case you are tempted to do what (e.g.) Russell and Norvig do in 
their textbook, and claim that the Rational Agents framework plus 
logical reasoning is the scientific framework on which an idealized 
intelligent system can be designed, I should point out that this concept 
is completely rejected by most cognitive psychologists:  they point out 
that the intelligence to be found in the only example of an 
intelligent ...

Re: [singularity] Vista/AGI

2008-04-06 Thread Richard Loosemore

J. Andrew Rogers wrote:


On Apr 6, 2008, at 11:58 AM, Richard Loosemore wrote:
Artificial Intelligence research does not have a credible science 
behind it.  There is no clear definition of what intelligence is, 
there is only the living example of the human mind that tells us that 
some things are intelligent.



The fact that the vast majority of AGI theory is pulled out of /dev/ass 
notwithstanding, your above characterization would appear to reflect 
your limitations which you have chosen to project onto the broader field 
of AGI research.  Just because most AI researchers are misguided fools 
and you do not fully understand all the relevant theory does not imply 
that this is a universal (even if it were).


Ad hominem.  Shameful.


This is not about mathematical proof, it is about having a credible, 
accepted framework that allows us to say that we have already come to 
an agreement that intelligence is X, and so, starting from that 
position we are able to do some engineering to build a system that 
satisfies the criteria inherent in X, so we can build an intelligence.



I do not need anyone's agreement to prove that system Y will have 
property X, nor do I have to accommodate pet theories to do so.  AGI is 
mathematics, not science.


AGI *is* mathematics?

Oh dear.

I'm sorry, but if you can make a statement such as this, and if you are 
already starting to reply to points of debate by resorting to ad 
hominems, then it would be a waste of my time to engage.


I will just note that if this point of view is at all widespread - if 
there really are large numbers of people who agree that "AGI is 
mathematics, not science" - then this is a perfect illustration of 
just why no progress is being made in the field.



Richard Loosemore


Plenty of people can agree on what X is and 
are satisfied with the rigor of whatever derivations were required.  
There are even multiple X out there depending on the criteria you are 
looking to satisfy -- the label of AI is immaterial.


What seems to have escaped you is that there is nothing about an 
agreement on X that prescribes a real-world engineering design.  We have 
many examples of tightly defined Xs in theory that took many decades of 
R&D to reduce to practice, or which in some cases have never been reduced 
to real-world practice even though we can very strictly characterize 
them in the mathematical abstract.  There are many AI researchers who 
could be accurately described as having no rigorous framework or 
foundation for their implementation work, but conflating this group with 
those stuck solving the implementation theory problems of a 
well-specified X is a category error.


There are two unrelated difficult problems in AGI: choosing a rigorous X 
with satisfactory theoretical properties and designing a real-world 
system implementation that expresses X with satisfactory properties.  
There was a time when most credible AGI research was stuck working on 
the former, but today an argument could be made that most credible AGI 
research is stuck working on the latter.  I would question the 
credibility of opinions offered by people who cannot discern the 
difference.



And in case you are tempted to do what (e.g.) Russell and Norvig do in 
their textbook...



I'm not interested in lame classical AI, so this is essentially a 
strawman.  To the extent I am personally in a theory camp, I have been 
in the broader algorithmic information theory camp since before it was 
on anyone's radar.



It is not that these investors understand the abstract ideas I just 
described, it is that they have a gut feel for the rate of progress 
and the signs of progress and the type of talk that they should be 
encountering if AGI had mature science behind it.  Instead, what they 
get is a feeling from AGI researchers that each one is doing the 
following:


1)  Resorting to a bottom line that amounts to "I have a really good 
personal feeling that my project really will get there", and


2)  Examples of progress that look like an attempt to dress a doughnut 
up as a wedding cake.



Sure, but what does this have to do with the topic at hand?  The problem 
is that investors lack any ability to discern a doughnut from a wedding 
cake.


J. Andrew Rogers







Re: [singularity] Vista/AGI

2008-04-06 Thread Richard Loosemore

Ben Goertzel wrote:

Funny dispute ... is AGI about mathematics or science

I would guess there are some approaches to AGI that are only minimally
mathematical in their design concepts (though of course math could be
used to explain their behavior)

Then there are some approaches, like Novamente, that mix mathematics
with less rigorous ideas in an integrative design...

And then there are more purely mathematical approaches -- I haven't
seen any that are well enough fleshed out to constitute pragmatic AGI
designs... but I can't deny the possibility

I wonder why some people think there is one true path to AGI ... I
strongly suspect there are many...


Actually, the discussion had nothing to do with the rather bizarre 
interpretation you put on it above.




Richard Loosemore






Re: [singularity] future search

2008-04-02 Thread Richard Loosemore

David Hart wrote:

Hi All,

I'm quite worried about Google's new /Machine Automated Temporal 
Extrapolation/ technology going FOOM!


http://www.google.com.au/intl/en/gday/

-dave


Too right mate:  I read about this yesterday, and it told me that you 
would send this post today...



:-)


Richard Loosemore



Re: [singularity] Vista/AGI

2008-03-16 Thread Richard Loosemore

[EMAIL PROTECTED] wrote:
You have to be careful with the phrase 'Manhattan-style project'.  


You are right.

On previous occasions when this subject has come up I, at least, have 
referred to the idea as an Apollo Project, not a Manhattan Project.




Richard Loosemore





 That was a military project with military aims, and a 'benevolent' 
dictator mgmt structure.  No input for researchers concerning things 
like applicability of the project output, delivery systems, timeframes, 
social issues, nothing.   Compartmentalization, not open overview, would 
be the general tenor.   Similarly, with a consortium, you have the 
necessary economic incentive struggles and tensions.   Only real chance 
would be the lone wolf, in my opinion, more like what you might call the 
Tesla-model.


Not that I really think AGI is something possible or desirable.

~Robert S.
-- Original message from Eric B. Ramsay [EMAIL PROTECTED]: -- 


It took Microsoft over 1000 engineers, $6 Billion and several years to make 
Vista.  Will building an AGI be any less formidable? If the AGI effort is 
comparable, how can the relatively small efforts of Ben (comparatively 
speaking) and others possibly succeed? If the effort to build an AGI is not 
comparable, why not? Perhaps a consortium (non-governmental) should be created 
specifically for the building of an AGI. Ben talks about a Manhattan style 
project. A consortium could pool all resources currently available (people and 
hardware), actively seek private funds on a  continuing basis and give 
coherence to the effort.

Eric B. Ramsay











Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread Richard Loosemore

Matt Mahoney wrote:

--- John G. Rose [EMAIL PROTECTED] wrote:

Is there really a bit per synapse? Is representing a synapse with a bit an
accurate enough simulation? One synapse is a very complicated system.


A typical neural network simulation uses several bits per synapse.  A Hopfield
net implementation of an associative memory stores 0.15 bits per synapse.  But
cognitive models suggest the human brain stores .01 bits per synapse. 
(There are 10^15 synapses but human long term memory capacity is 10^9 bits).


Sorry, I don't buy this at all.  This makes profound assumptions about 
how information is stored in memory, averaging out the net storage and 
ignoring the immediate storage capacity.  A typical synapse actually 
stores a great deal more than a fraction of a bit, as far as we can 
tell, but this information is stored in such a way that the system as a 
whole can actually use the information in a meaningful way.


In that context, quoting "0.01 bits per synapse" is a completely 
meaningless statement.
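[For reference, a back-of-envelope check using only the figures quoted above -- 10^15 synapses and 10^9 bits of long-term memory, both taken from the quote rather than from any new data -- works out to roughly 10^-6 bits per synapse, the 10^-6 figure that appears elsewhere in this thread:]

# figures as quoted above (assumed values, not measurements)
synapses = 1e15        # total synapses in the human brain
ltm_bits = 1e9         # estimated long-term memory capacity in bits
print(ltm_bits / synapses)   # -> 1e-06 bits per synapse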


Also, typical neural network simulations use more than a few bits as 
well.  When I did a number of backprop NN studies in the early 90s, my 
networks had to use floating point numbers because the behavior of the 
net deteriorated badly if the numerical precision was reduced.  This was 
especially important on long training runs or large datasets.
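[A minimal sketch of the kind of experiment being described here -- this is not the original code, and the toy XOR network, learning rate and crude rounding scheme are invented purely for illustration; exact numbers will vary with the seed and settings:]

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(decimals=None, epochs=5000, lr=1.0):
    # tiny 2-4-1 backprop network trained on XOR; if 'decimals' is given,
    # all weights are rounded after every update to mimic reduced precision
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)                 # forward pass
        out = sigmoid(h @ W2 + b2)
        d2 = (out - y) * out * (1 - out)         # output-layer delta
        d1 = (d2 @ W2.T) * h * (1 - h)           # hidden-layer delta
        W2 -= lr * h.T @ d2;  b2 -= lr * d2.sum(0, keepdims=True)
        W1 -= lr * X.T @ d1;  b1 -= lr * d1.sum(0, keepdims=True)
        if decimals is not None:                 # crude precision reduction
            W1, W2 = np.round(W1, decimals), np.round(W2, decimals)
            b1, b2 = np.round(b1, decimals), np.round(b2, decimals)
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return float(np.mean((out - y) ** 2))        # final mean squared error

print("full float64 weights:", train_xor())
print("weights rounded to 1 decimal place:", train_xor(decimals=1))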





Richard Loosemore



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

--- John G. Rose [EMAIL PROTECTED] wrote:

Is there really a bit per synapse? Is representing a synapse with a bit an 
accurate enough simulation? One synapse is a very complicated system.

A typical neural network simulation uses several bits per synapse.  A Hopfield 
net implementation of an associative memory stores 0.15 bits per synapse.  But 
cognitive models suggest the human brain stores .01 bits per synapse. 
(There are 10^15 synapses but human long term memory capacity is 10^9 bits).

Sorry, I don't buy this at all.  This makes profound assumptions about 
how information is stored in memory, averaging out the net storage and 
ignoring the immediate storage capacity.  A typical synapse actually 
stores a great deal more than a fraction of a bit, as far as we can 
tell, but this information is stored in such a way that the system as a 
whole can actually use the information in a meaningful way.


In that context, quoting 0.01 bits per synapse is a completely 
meaningless statement.


I was referring to Landauer's estimate of long term memory learning rate of
about 2 bits per second.  http://www.merkle.com/humanMemory.html
This does not include procedural memory, things like visual perception and
knowing how to walk.  So 10^-6 bits is low.  But how do we measure such
things?


I think my general point is that "bits per second" or "bits per synapse" 
is a valid measure if you care about something like an electrical signal 
line, but is simply an incoherent way to talk about the memory 
capacity of the human brain.


Saying "0.01 bits per synapse" is no better than opening and closing 
one's mouth without saying anything.




Richard Loosemore.



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Richard Loosemore

Stathis Papaioannou wrote:

On 20/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:


I am aware of some of those other sources for the idea:  nevertheless,
they are all nonsense for the same reason.  I especially single out
Searle:  his writings on this subject are virtually worthless.  I have
argued with Searle to his face, and I have talked with others
(Hofstadter, for example) who have also done so, and the consensus among
these people is that his arguments are built on confusion.


Just to be clear, this is *not* the same as Searle's Chinese Room
argument, which only he seems to find convincing.


Oh, my word:  if only it was just him!

He was at the Tucson Consciousness conference two years ago, and in his 
big talk he strutted about the stage saying "I invented the Chinese Room 
thought experiment, and the Computationalists tried to explain it away 
for twenty years, until finally the dust settled, and now they 
have given up and everyone agrees that I WON!"


This statement was followed by tumultuous applause and cheers from a 
large fraction of the 800+ audience.


You're right that it is not the same as the Chinese Room, but if I am 
not mistaken this was one of his attempts to demolish a reply to the 
Chinese Room.




Richard Loosemore



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Richard Loosemore

John Ku wrote:


By the way, I think this whole tangent was actually started by Richard
misinterpreting Lanier's argument (though quite understandably given
Lanier's vagueness and unclarity). Lanier was not imagining the
amazing coincidence of a genuine computer being implemented in a
rainstorm, i.e. one that is robustly implementing all the right causal
laws and the strong conditionals Chalmers talks about. Rather, he was
imagining the more ordinary and really not very amazing coincidence of
a rainstorm bearing a certain superficial isomorphism to just a trace
of the right kind of computation. He rightly notes that if
functionalism were committed to such a rainstorm being conscious, it
should be rejected. I think this is true whether or not such
rainstorms actually exist or are likely since a correct theory of our
concepts should deliver the right results as the concept is applied to
any genuine possibility. For instance, if someone's ethical theory
delivers the result that it is perfectly permissible to press a button
that would cause all conscious beings to suffer for all eternity, then
it is no legitimate defense to claim that's okay because it's really
unlikely. As I tried to explain, I think Lanier's argument fails
because he doesn't establish that functionalism is committed to the
absurd result that the rainstorms he discusses are conscious or
genuinely implementing computation. If, on the other hand, Lanier were
imagining a rainstorm miraculously implementing real computation (in
the way Chalmers discusses) and somehow thought that was a problem for
functionalism, then of course Richard's reply would roughly be the
correct one.


Oh, I really don't think I made that kind of mistake in interpreting 
Lanier's argument.


If Lanier was attacking a very *particular* brand of functionalism (the 
kind that would say isomorphism is everything, so any isomorphism 
between a rainstorm and a conscious computer, even for just a 
millisecond, would leave you no option but to say that the rainstorm is 
conscious), then perhaps I agree with Lanier.  That kind of simplistic 
functionalism is just not going to work.


But I don't think he was narrowing his scope that much, was he?  If so, 
he was attacking a straw man.  I just assumed he wasn't doing anything 
so trivial, but I stand to be corrected if he was.  I certainly thought 
that many of the people who cited Lanier's argument were citing it as a 
demolition of functionalism in the large.


There are many functionalists who would say that what matters is a 
functional isomorphism, and that even though we have difficulty at this 
time saying exactly what we mean by a functional isomorphism, 
nevertheless it is not good enough to simply find any old isomorphism 
(especially one which holds for only a moment).


I would also point out one other weakness in his argument:  in order to 
get his isomorphism to work, he almost certainly has to allow the 
hypothetical computer to implement the rainstorm at a different level 
of representation from the consciousness.  It is only if you allow 
this difference of levels between the two things that the hypothetical 
machine is guaranteed to be possible.  If the two things are supposed to 
be present at exactly the same level of representation in the machine, 
then I am fairly sure that the machine is over-constrained, and thus we 
cannot say that such a machine is, in general, possible.


But if they happen at different levels, then the argument falls apart 
for a different reason:  you can always make two systems coexist in this 
way, but that does not mean that they are the same system.  There is 
no actual isomorphism in this case.  This, of course, was Searle's main 
mistake:  understanding of English and understanding of Chinese were 
happening at two different levels, and therefore in two different 
systems, and nobody claims that what one system understands, the other 
must also be understanding.  (Searle's main folly, of course, is that he 
has never shown any sign of being able to understand this point.)




Richard Loosemore



Re: [singularity] Definitions

2008-02-19 Thread Richard Loosemore

John K Clark wrote:

And I will define consciousness just as soon as you define "define".


Ah, but that is exactly my approach.

Thus, the subtitle I gave to my 2006 conference paper was "Explaining 
Consciousness by Explaining That You Cannot Explain it, Because Your 
Explanation Mechanism is Getting Zapped".



Richard Loosemore



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-19 Thread Richard Loosemore

Stathis Papaioannou wrote:

On 19/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:


Sorry, but I do not think your conclusion even remotely follows from the
premises.

But beyond that, the basic reason that this line of argument is
nonsensical is that Lanier's thought experiment was rigged in such a way
that a coincidence was engineered into existence.

Nothing whatever can be deduced from an argument in which you set things
up so that a coincidence must happen!  It is just a meaningless
coincidence that a computer can in theory be set up to be (a) conscious
and (b) have a lower level of its architecture be isomorphic to a rainstorm.


I don't see how the fact something happens by coincidence is by itself
a problem. Evolution, for example, works by means of random genetic
mutations some of which just happen to result in a phenotype better
suited to its environment.

By the way, Lanier's idea is not original. Hilary Putnam, John Searle,
Tim Maudlin, Greg Egan, Hans Moravec, David Chalmers (see the paper
cited by Kaj Sotola in the original thread -
http://consc.net/papers/rock.html) have all considered variations on
the theme. At the very least, this should indicate that the idea
cannot be dismissed as just obviously ridiculous and unworthy of
careful thought.


I am aware of some of those other sources for the idea:  nevertheless, 
they are all nonsense for the same reason.  I especially single out 
Searle:  his writings on this subject are virtually worthless.  I have 
argued with Searle to his face, and I have talked with others 
(Hofstadter, for example) who have also done so, and the consensus among 
these people is that his arguments are built on confusion.


(And besides, I don't stop thinking just because others have expressed 
their view of an idea:  I use my own mind, and if I can come up with an 
argument against the idea, I prefer to use that rather than defer to 
authority. ;-) )


But going back to the question at issue:  this coincidence is a 
coincidence that happens in a thought experiment. If someone constructs 
a thought experiment in which they allow such things as computers of 
quasi-infinite size, they can make anything happen, including ridiculous 
coincidences!


If you set the thought experiment up so that there is enough room for a 
meaningless coincidence to occur within the thought experiment, then 
what you have is *still* just a meaningless coincidence.


I don't think I can put it any plainer than that.



Richard Loosemore



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-18 Thread Richard Loosemore

Stathis Papaioannou wrote:

On 18/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:
[snip]

But again, none of this touches upon Lanier's attempt to draw a bogus
conclusion from his thought experiment.



No external observer would ever be able to keep track of such a
fragmented computation and as far as the rest of the universe is
concerned there may as well be no computation.

This makes little sense, surely.  You mean that we would not be able to
interact with it?  Of course not:  the poor thing will have been
isolated from meanigful contact with the world because of the jumbled up
implementation that you posit.  Again, though, I see no relevant
conclusion emerging from this.

I cannot make any sense of your statement that "as far as the rest of
the universe is concerned there may as well be no computation."  So we
cannot communicate with it any more: that should not be surprising,
given your assumptions.


We can't communicate with it so it is useless as far as what we
normally think of as computation goes. A rainstorm contains patterns
isomorphic with an abacus adding 127 and 498 to give 625, but to
extract this meaning you have to already know the question and the
answer, using another computer such as your brain. However, in the
case of an inputless simulation with conscious inhabitants this
objection is irrelevant, since the meaning is created by observers
intrinsic to the computation.

Thus if there is any way a physical system could be interpreted as
implementing a conscious computation, it is implementing the conscious
computation, even if no-one else is around to keep track of it.



Sorry, but I do not think your conclusion even remotely follows from the 
premises.


But beyond that, the basic reason that this line of argument is 
nonsensical is that Lanier's thought experiment was rigged in such a way 
that a coincidence was engineered into existence.


Nothing whatever can be deduced from an argument in which you set things 
up so that a coincidence must happen!  It is just a meaningless 
coincidence that a computer can in theory be set up to be (a) conscious 
and (b) have a lower level of its architecture be isomorphic to a rainstorm.


It is as simple as that.



Richard Loosemore



Re: [singularity] Definitions

2008-02-18 Thread Richard Loosemore

John K Clark wrote:

Matt Mahoney [EMAIL PROTECTED]


It seems to me the problem is
defining consciousness, not testing for it.


And it seems to me that beliefs of this sort are exactly the reason 
philosophy is in such a muddle. A definition of consciousness is not
needed; in fact, unless you're a mathematician, where they can be of some 
use, one can lead a full, rich, rewarding intellectual life without 
having a good definition of anything. Compared with examples, 
definitions are of trivial importance.


On the contrary, in this case I have argued that it is exactly the lack 
of a clear definition of what consciousness is supposed to be that 
causes so much of the problem of trying to explain it.


Further, I have suggested that the C problem can be solved once we 
understand *why* we have so much trouble saying what it is.  I have 
given an explicit, complete explanation for what consciousness is, which 
starts out from a resolution of the definition-difficulty.


I note that Nick Humphrey has recently started to say something very 
similar.




Richard Loosemore



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Richard Loosemore

Stathis Papaioannou wrote:

On 17/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:


The first problem arises from Lanier's trick of claiming that there is a
computer, in the universe of all possible computers, that has a machine
architecture and a machine state that is isomorphic to BOTH the neural
state of a brain at a given moment, and also isomorphic to the state of
a particular rainstorm at a particular moment.


In the universe of all possible computers and programs, yes.


This is starting to be rather silly because the rainstorm and computer
then diverge in their behavior in the next tick of the clock. Lanier
then tries to persuade us, with some casually well chosen words, that he
can find a computer that will match up with the rainstorm AND the brain
for a few seconds, or a few minutes ... or ... how long?  Well, if he
posits a large enough computer, maybe the whole lifetime of that brain?

The problem with this is that what his argument really tells us is that
he can imagine a quasi-infinitely large, hypothetical computer that just
happens to be structured to look like (a) the functional equivalent of a
particular human brain for an indefinitely long period of time (at least
the normal lifetime of that human brain), and, coincidentally, a
particular rainstorm, for just a few seconds or minutes of the life of
that rainstorm.

The key word is coincidentally.


There is no reason why it has to be *the same* computer from moment to
moment. If your mind were uploaded to a computer and your physical
brain died, you would experience continuity of consciousness (or if
you prefer, the illusion of continuity of consciousness, which is just
as good) despite the fact that there is a gross physical discontinuity
between your brain and the computer. You would experience continuity
of consciousness even if every moment were implemented on a completely
different machine, in a completely different part of the universe,
running in a completely jumbled up order.


Some of this I agree with, though it does not touch on the point that I 
was making, which was that Lanier's argument was valueless.


The last statement you make, though, is not quite correct:  with a 
jumbled-up sequence of episodes during which the various machines were 
running the brain code, the whole would lose its coherence, because input 
from the world would now be randomised.


If the computer was being fed input from a virtual reality simulation, 
that would be fine.  It would sense a sudden change from real world to 
virtual world.


But again, none of this touches upon Lanier's attempt to draw a bogus 
conclusion from his thought experiment.




No external observer would ever be able to keep track of such a
fragmented computation and as far as the rest of the universe is
concerned there may as well be no computation.


This makes little sense, surely.  You mean that we would not be able to 
interact with it?  Of course not:  the poor thing will have been 
isolated from meaningful contact with the world because of the jumbled-up 
implementation that you posit.  Again, though, I see no relevant 
conclusion emerging from this.


I cannot make any sense of your statement that "as far as the rest of 
the universe is concerned there may as well be no computation."  So we 
cannot communicate with it any more: that should not be surprising, 
given your assumptions.



But if the computation
involves conscious observers in a virtual reality, why should they be
any less conscious due to being unable to observe and interact with
the substrate of their implementation?


No reason at all!  They would be conscious.  Isaac Newton could not 
observe and interact with the substrate of his implementation, without 
making a hole in his skull that would have killed his brain ... but that 
did not have any bearing on his consciousness.



In the final extrapolation of this idea it becomes clear that if any
computation can be mapped onto any physical system, the physical
system is superfluous and the computation resides in the mapping, an
abstract mathematical object.


This is functionalism, no?  I am not sure if you are disagreeing with 
functionalism or supporting it.  ;-)


Well, the computation is not the implementation, for sure, but is it 
appropriate to call it an "abstract mathematical mapping"?



This leads to the idea that all
computations are actually implemented in a Platonic reality, and the
universe we observe emerges from that Platonic reality, as per eg. Max
Tegmark and in the article linked to by Matt Mahoney:


I don't see how this big jump follows.  I have a different 
interpretation that does not need Platonic realities, so it looks like 
a non sequitur to me.




http://www.mattmahoney.net/singularity.html


I find most of what Matt says in this article to be incoherent:  
assertions pulled out of thin air, and citations of unjustifiable claims 
made by others as if they were god-sent truth.



Richard Loosemore

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:
When people like Lanier allow themselves the luxury of positing 
infinitely large computers (who else do we know who does this?  Ah, yes, 
the AIXI folks), they can make infinitely unlikely coincidences happen.


It is a commonly accepted practice to use Turing machines in proofs, even
though we can't actually build one.


So?  That was not the practice that I condemned.

My problem is with people like Hutter or Lanier using thought 
experiments in which the behavior of quasi-infinite computers is treated 
as if it were a meaningful thing in the real universe.


There is a world of difference between that and using Turing machines in 
proofs.




Hutter is not proposing a universal
solution to AI.  He is proving that it is not computable.


He is doing nothing of the sort.  As I stated in the quote above, he is 
drawing a meaningless conclusion by introducing a quasi-infinite 
computation into his proof:  when people try to make claims about the 
real world (i.e. claims about what artificial intelligence is) by 
postulating machines with quasi-infinite amounts of computation going on 
inside them, they can get anything to happen.



Lanier is not
suggesting implementing consciousness as a rainstorm.  He is refuting its
existence.


And you missed what I said about Lanier, apparently.

He refuted nothing.  He showed that with a quasi-infinite computer in 
his thought experiment, he can make a coincidence happen.


Big deal.



Richard Loosemore






Re: [singularity] AI critique by Jaron Lanier

2008-02-16 Thread Richard Loosemore

Eric B. Ramsay wrote:
I don't know when Lanier wrote the following but I would be interested 
to know what the AI folks here think about his critique (or direct me to 
a thread where this was already discussed). Also would someone be able 
to re-state his rainstorm thought experiment more clearly -- I am not 
sure I get it:


 http://www.jaronlanier.com/aichapter.html


Lanier's rainstorm argument is spurious nonsense.

It relies on a sleight of hand, and preys on the inability of most 
people to notice the point at which he slips from valid-analogy to 
nonsense-analogy.


He also then goes on to use a debating trick that John Searle is fond 
of:  he claims that the people who disagree with his argument always 
choose a different type of counter-argument.  His implication is that, 
because they follow different paths, therefore they don't agree about 
what is wrong, therefore ALL of them are fools, and therefore NONE of 
their counter-arguments are valid.


Really.  I like Jaron Lanier as a musician, but this is drivel.



Richard Loosemore



Re: [singularity] MindForth achieves True AI functionality

2008-02-02 Thread Richard Loosemore

Eric B. Ramsay wrote:
I noticed that the members of the list have completely ignored this 
pronouncement by A.T. Murray. Is there a reason for this (for example is 
this person considered fringe or worse)?


Being as generous as I can to Arthur (as I did recently on the AGI list) 
his pronouncements about Mentifex may be sincere, but his estimates of 
its capabilities are somewhat ... exaggerated.




Richard Loosemore



Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-28 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:

Matt Mahoney wrote:

Suppose that the collective memories of all the humans make up only one
billionth of your total memory, like one second of memory out of your human
lifetime.  Would it make much difference if it was erased to make room for
something more important?

This question is not coherent, as far as I can see.  My total memory? 
Important to whom?  Under what assumptions do you suggest this situation.


I mean the uploaded "you" with the computing power of 10^19 brains (to pick a
number).  When you upload, there are two of you, the original human and the copy. 
Both copies are you in the sense that both behave as though conscious and both
have your (original) memories.  I use the term "you" for the upload in this
sense, although it is really everybody.


So you are referring to a *combination* upload, not several billion 
separate uploads that maintain their independence after uploading.  An 
AGI with the combined memories of all the people on the planet is one 
single entity, unless it splits itself up.




By "conscious behavior", I mean belief that sensory input is the result of a
real environment and belief in having some control over it.  This is different
from the common meaning of consciousness, which we normally associate with
human form or human behavior.  By "believe" I mean claiming that something is
true, and behaving in a way that would increase reward if it is true.  I don't
claim that consciousness exists.

My assumption is friendly AI under the CEV model.  Currently, FAI is unsolved.
 CEV only defines the problem of friendliness, not a solution.  As I
understand it, CEV defines AI as friendly if on average it gives humans what
they want in the long run, i.e. denies requests that it predicts we would
later regret.  If AI has superhuman intelligence, then it could model human
brains and make such predictions more accurately than we could ourselves.  The
unsolved step is to actually motivate the AI to grant us what it knows we
would want.  The problem is analogous to human treatment of pets.  We know
what is best for them (e.g. vaccinations they don't want), but it is not
possible for animals to motivate us to give it to them.


This paragraph assumes that humans and AGIs will be completely separate, 
which I have already explained is an extremely unlikely scenario.


Why do you persist in ignoring that fact?

I cannot address any of the other issues you raise unless that one 
outrageously implausible assumption is removed from the argument.



Richard Loosemore




FAI under CEV would not be applicable to uploaded humans with collective
memories because the AI could not predict what an equal or greater
intelligence would want.  For the same reason, it may not apply to augmented
human brains, i.e. brains extended with additional memory and processing
power.

My question to you, the upload with the computing power of 10^19 brains, is
whether the collective memory of the 10^10 humans alive at the time of the
singularity is important.  Suppose that this memory (say 10^25 bits out of
10^34 available bits) could be lossily compressed into a program that
simulated the rise of human civilization on an Earth similar to ours, but with
different people.  This compression would make space available to run many
such simulations.

So when I ask you (the upload with 10^19 brains) which decision you would
make, I realize you (the original) are trying to guess the motivations of an
AI that knows 10^19 times more.  We need some additional assumptions:

1. You (the upload) are a friendly AI as defined by CEV.
2. All humans have been uploaded because as a FAI you predicted that humans
would want their memories preserved, and no harm to the original humans is
done in the process.
3. You want to be smarter (i.e. more processing speed, memory, I/O bandwidth,
and knowledge), because this goal is stable under RSI.
4. You cannot reprogram your own goals, because systems that could are not
viable.
5. It is possible to simulate intermediate level agents with memories of one
or more uploaded humans, but less powerful than yourself.  FAI applies to
these agents.
6. You are free to reprogram the goals and memories of humans (uploaded or
not) and agents less powerful than yourself, consistent with what you predict
they would want in the future.


-- Matt Mahoney, [EMAIL PROTECTED]







Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-26 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:

Why do say that Our reign will end in a few decades when, in fact, one 
of the most obvious things that would happen in this future is that 
humans will be able to *choose* what intelligence level to be 
experiencing, on a day to day basis?  Similarly, the AGIs would be able 
to choose to come down and experience human-level intelligence whenever 
they liked, too.


Let's say that is true.  (I really have no disagreement here.)  Suppose that
at the time of the singularity the memories of all 10^10 humans alive at
the time, you included, are nondestructively uploaded.  Suppose that this
database is shared by all the AGIs.  Now is there really more than one AGI? 
Are you (the upload) still you?


Well, the first point is that we all get to choose whether or not this 
upload happens:  I don't particularly want to duplicate myself in this 
way, and I think many others would also be cautious, so your scenario is 
less than likely.  I do not have the slightest desire to become nothing 
but a merged copy of myself within a larger entity, and I don't think 
many other people would want to be nothing but that, so this merge (if 
it happened at all) would just take place in parallel with everything else.


But if they did, and it was implemented exactly as you describe, then 
all 10^10 minds would be merged (is that what you meant?) and that 
merged mind would be a single individual with rather a lot of baggage.


There would also be 10^10 humans carrying on as normal (per your 
description of the scenario) and I cannot see any reason to call them 
anything other than themselves.




Does it now matter if humans in biological form still exist?  You have
preserved everyone's memory and DNA, and you have the technology to
reconstruct any person from this information any time you want.


What counts is the number of individuals, whatever form they transmute 
themselves into.  I suspect that most people would want to stay 
individual.  Whether they use human form or not, I don't know, but I 
suspect that the human form will remain a baseline, with people taking 
trips out to other forms, for leisure.  Whether that remains so forever, 
I cannot say, but you are implying here that (for some reason that is 
completely obscure to me) there would be some pressure to upload 
everyone's minds to a central databank and then wipe out the originals 
and reconstruct them occasionally.  There would be no pressure for 
people to do that, so why would it happen?


So when you say "does it matter if humans in biological form still 
exist?", I say:  it will matter to those humans, probably, so they will 
still exist.



Suppose that the collective memories of all the humans make up only one
billionth of your total memory, like one second of memory out of your human
lifetime.  Would it make much difference if it was erased to make room for
something more important?


This question is not coherent, as far as I can see.  My total memory? 
 Important to whom?  Under what assumptions do you suggest this situation.


You seem to be presenting me with a scenario out of the blue, talking as 
if it were in some sense an inevitability (which it clearly is not) and 
then asking me to comment on it.  That's a loaded question, surely?



I am not saying that the extinction of humans and its replacement with godlike
intelligence is necessarily a bad thing, but it is something to be aware of.


Nothing in what I have said implies that there would be any such thing 
as an "extinction of humans and its replacement with godlike 
intelligence", and you have not tried to establish that this is 
inevitable or likely, so the issue strikes me as pointless.  We might as 
well discuss the pros and cons of all the humans becoming fans of only 
one baseball team, for all eternity, and the terrible pain this would 
cause to all the other teams.  :-)




Richard Loosemore



Re: [singularity] John Searle...

2007-10-26 Thread Richard Loosemore

candice schuster wrote:

Richard,
 
Your responses to me seem to go in round abouts.  No insult intended 
however.
 
You say the AI will in fact reach full consciousness.  How on earth 
would that ever be possible ? 


I think I recently (last week or so) wrote out a reply to someone on the 
question of what a good explanation of consciousness might be (was it 
on this list?).  I was implicitly referring to that explanation of 
consciousness.  It makes the definite prediction that consciousness 
(subjective awareness, qualia, etc. -- what Chalmers called the Hard 
Problem of consciousness) is a direct result of an intelligent system 
being built with a sufficient level of complexity and self-reflection.


Make no mistake:  the argument is long and tangled (I will write it up at 
length when I can), so I do not pretend to be trying to convince you of 
its validity here.  All I am trying to do at this point is to state that 
THAT is my current understanding of what would happen.


Let me rephrase that:  we (a subset of the AI community) believe that we 
have discovered concrete reasons to predict that a certain type of 
organization in an intelligent system produces consciousness.


This is not meant to be one of those claims that can be summarized in a 
quick analogy or quick demonstration, so there is no way for me to 
convince you quickly; all I can say is that we have very strong reasons 
to believe that it emerges.



You mentioned in previous posts that the AI would only be programmed 
with 'Nice feelings' and would only ever want to serve the good of 
mankind ?  If the AI has it's own ability to think etc, what is stopping 
it from developing negative thoughts...the word 'feeling' in itself 
conjures up both good and bad.  For instance...I am an AI...I've 
witnessed an act of injustice, seeing as I can feel and have 
consciousness my consciousness makes me feel Sad / Angry ?


Again, I have talked about this a few times before (cannot remember the 
most recent discussion) but basically there are two parts to the mind: 
the thinking part and the motivational part.  If the AGI has a 
motivational part that feels driven by empathy for humans, and if it does 
not possess any of the negative motivations that plague people, then it 
would not react in a negative (violent, vengeful, resentful, etc.) way.


Did I not talk about that in my reply to you?  How there is a difference 
between having consciousness and feeling motivations?  Two completely 
separate mechanisms/explanations?




Hold on...that would not be possible seeing as my owner has an 'Off' 
button he can push to avoid me feeling that way and hey, I have only been 
programmed with 'Nice feelings' even though my AI Creators have told the 
rest of the world I have a full working conscious.  It's starting to 
sound a bit like me presenting myself to the world after my 
'Hippocampus' has been removed or better yet I've had a full frontal 
labotomy'.


[Full Frontal Labotomy?  :-)  You mean pre-frontal lobotomy, maybe. 
Either that or this is a great title for a movie about a lab full of 
scientists trapped in a nudist colony].


Not at all like that.  Did you ever have a good day, when you were so 
relaxed that nothing could disturb your feelings of generosity to the 
world?  Imagine a creature that genuinely felt like that, and simply 
never could have a bad day.


But to answer you more fully:  all of this depends on exactly how the 
motivation system of humans and AGIs is designed.  We can only really 
have that discussion in the context of a detailed knowledge of 
specifics, surely?



And you say the AI will have thoughts and feelings about the world 
around it ?  I shudder to think what a newly born, pure AI had to think 
about the world around us now.  Or is that your ultimate goal in this 
Utopia that you see Richard ?  That the AI's will become like Spiritual 
Masters to us and make everything 'all better' so to speak by creating 
little 'ME' worlds for us very confused, 'life purpose' seeking people ?


No, they will just solve enough of the boring problems that we can enjoy 
the rest of life.


Please also note the ideas in my parallel discussion with Matt Mahoney: 
do not be tempted to think of a "Them and Us" situation:  we would have 
the ability to become just as knowledgeable as they are, at any time. 
We could choose our level of understanding on a day to day basis, the 
way we now choose our clothes.  Same goes for them.


We would not be two species.  Not master and servant.  Just one species 
with more options than before.


[I can see I am going to have to write this out in more detail, just to 
avoid the confusion caused by brief glimpses of the larger picture].




Richard Loosemore



Candice
 




Re: [singularity] Re: CEV

2007-10-26 Thread Richard Loosemore

Benjamin Goertzel wrote:


On 10/26/07, Stefan Pernar [EMAIL PROTECTED] wrote:
 
  My one sentence summary of CEV is: "What would a better me/humanity want?"
  Is that in line with your understanding?



No...

I'm not sure I fully grok Eliezer's intentions/ideas, but I will 
summarize here the current idea I have of CEV... which is quite 
different than yours, and not as oversimplified.

My understanding is that it's more like this (taking some liberties)

X0 = me
Y0 = what X0 thinks is good for the world
X1 = what X0 wants to be
Y1 = what X1 would think is good for the world
X2 = what X1 would want to be
Y2 = what X2 would think is good for the world.
...

The only circularity here is in the sense of convergence-to-a-fixed-point,
i.e. the series may tend gradually (or quickly) toward

X = what X wants to be
Y = what X thinks is good for the world

In fact some people may already be exactly at this fixed point, i.e.
they may be exactly what they want to be already...  For them,
we'd have X1=X0 and Y1=Y0

You can sensibly argue that CEV is poorly-defined, or that it's
not likely to converge to anything given a normal human as an initial
condition, or that it is likely to converge to totally different things
for different sorts of people, giving no clear message...

But I don't think you can argue it's circular...
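[A schematic illustration of the convergence-to-a-fixed-point reading sketched above; the numeric idealise map is entirely invented and merely stands in for "what X wants to be" -- only the shape of the iteration matters:]

def idealise(x):
    # invented stand-in for "what X wants to be": each step moves the
    # agent halfway towards some notional ideal (here, the value 1.0)
    return x + 0.5 * (1.0 - x)

x = 0.0                          # X0 = me, crudely represented as one number
for step in range(100):
    next_x = idealise(x)         # X_{n+1} = what X_n wants to be
    if abs(next_x - x) < 1e-9:   # fixed point reached: X = what X wants to be
        break
    x = next_x
print(step, round(x, 6))         # converges towards 1.0 in this toy model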


Stefan can correct me if I am wrong here, but I think that both you 
and Aleksei have misunderstood the sense in which he is pointing to a 
circularity.


If you build an AGI, and it sets out to discover the convergent desires 
(the CEV) of all humanity, it will be doing this because it has the goal 
of using this CEV as the basis for the friendly motivations that will 
henceforth guide it.


But WHY would it be collecting the CEV of humanity in the first phase of 
the operation?  What would motivate it to do such a thing?  What exactly 
is it in the AGI's design that makes it feel compelled to be friendly 
enough toward humanity that it would set out to assess the CEV of humanity?


The answer is:  its initial feelings of friendliness toward humanity 
would have to be the motivation that drove it to find out the CEV.


The goal state of its motivation system is assumed in the initial state 
of its motivation system.  Hence: circular.




Richard Loosemore



Re: [singularity] Re: CEV

2007-10-26 Thread Richard Loosemore

Stefan Pernar wrote:



On 10/26/07, Richard Loosemore [EMAIL PROTECTED] wrote:


Stefan can correct me if I am wrong here, but I think that both yourself
and Aleksei have misunderstood the sense in which he is pointing to a
circularity.

If you build an AGI, and it sets out to discover the convergent desires
(the CEV) of all humanity, it will be doing this because it has the goal
of using this CEV as the basis for the friendly motivations that will
henceforth guide it.

But WHY would it be collecting the CEV of humanity in the first
phase of
the operation?  What would motivate it to do such a thing?  What exactly
is it in the AGI's design that makes it feel compelled to be friendly
enough toward humanity that it would set out to assess the CEV of
humanity?

The answer is:  its initial feelings of friendliness toward humanity
would have to be the motivation that drove it to find out the CEV.

The goal state of its motivation system is assumed in the initial state
of its motivation system.  Hence: circular.


Interesting point, and I guess asking if a programmer's CEV would be to 
let an AGI find the CEV of humanity is another aspect of finding 
circularity in the concept.


Yup.

What really matters is finding out how to dispose the AGI to have 
friendliness from the outset, so that it can then seek the specific 
needs of humanity.


The way out of the circularity is to understand how to build 
motivational systems in an AGI, and how to give global feelings of 
(e.g.) empathy to the AGI.


As I have said before, I have an approach to that problem.  I also 
believe that the standard AI understanding of motivation, in which goals 
are represented in the same semantic format as the rest of the system's 
explicit knowledge, is such a bad way to drive an AGI that it (a) will 
not actually work if you want the system to be generally intelligent, 
and (b) would in any case be a disastrous way to ensure the motivations 
of an AGI.



Richard Loosemore



Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-26 Thread Richard Loosemore

[EMAIL PROTECTED] wrote:
 
 	

I have to applaud this comment, and its general tenor.

-- Original message from Mike Tintner
[EMAIL PROTECTED]: --


  Every speculation on this board about the nature of future AGI's
has been
  pure fantasy. Even those which try to dress themselves up in some
semblance
  of scientific reasoning. 

 [snip Long Rant]

That is a shame, especially given Mike's record of posts on this list. 
Most people here would say that he mistakes his own lack of understanding 
for evidence that what he reads is pure fantasy.


While there are many, many people who just churn out pure-fantasy ideas 
about artificial intelligence (Exhibit One:  99% of the science fiction 
literature), the purpose of this list is (among other things) to allow 
some people who know about the technical details to make informed estimates.


As I said before, Mike's sweeping dismissal could be used to condemn the 
work of the Wright brothers, a year before they got off the ground, or 
the work of Wernher von Braun et al, ten years before they got a human 
on the Moon, or the work of any other group of scientists in the years 
leading up to their discoveries.




Richard Loosemore



Re: [singularity] Re: CEV

2007-10-26 Thread Richard Loosemore

Benjamin Goertzel wrote:


So a VPOP is defined to be a safe AGI.  And its purpose is to solve the
problem of building the first safe AGI...



No, the VPOP is supposed to be, in a way, a safe **narrow AI** with a goal
of carrying out a certain kind of extrapolation

What you are doubting, perhaps, is that it is possible to create a suitably
powerful optimization process using a narrow-AI methodology, without
giving this optimization process a flexible, AGI-style motivational
system...

You may be right, I'm really not sure...

As expressed many times before, I consider CEV a fascinating thought-
experiment, but I'm not even sure it's a well-founded concept (would the
sequence be convergent? how different would the results be for different
people?), nor that it will ever be computationally feasible ... and I really
doubt it's the way the Singularity's gonna happen!!


On all these last points, we agree.



Richard Loosemore



Re: [singularity] John Searle...

2007-10-26 Thread Richard Loosemore

Charles D Hixson wrote:

Richard Loosemore wrote:

candice schuster wrote:

Richard,
 
Your responses to me seem to go in round abouts.  No insult intended 
however.
 
You say the AI will in fact reach full consciousness.  How on earth 
would that ever be possible ? 


I think I recently (last week or so) wrote out a reply to someone on 
the question of what a good explanation of consciousness might be 
(was it on this list?).  I was implicitly referring to that explanation 
of consciousness.  It makes the definite prediction that consciousness 
(subjective awareness, qualia, etc., what Chalmers called the 
Hard Problem of consciousness) is a direct result of an intelligent 
system being built with a sufficient level of complexity and 
self-reflection.


Make no mistake:  the argument is long and tangled (I will write it up 
at length when I can) so I do not pretend to be trying to convince you 
of its validity here.  All I am trying to do at this point is to state 
that THAT is my current understanding of what would happen.


Let me rephrase that:  we (a subset of the AI community) believe that 
we have discovered concrete reasons to predict that a certain type of 
organization in an intelligent system produces consciousness.


This is not meant to be one of those claims that can be summarized in 
a quick analogy, or quick demonstration, so there is no way for me to 
convince you quickly; all I can say is that we have very strong 
reasons to believe that it emerges.

Sounds reasonable to me.   Actually, it seems intuitively obvious.  
I'm not sure that a reasoned argument in favor of it can exist, because 
there's no solid definition of consciousness or qualia.  What some 
will consider reasonable, others will see no grounds for accepting.  
Consider the people who can argue with a straight face that 
dogs don't have feelings.



I'm glad you say that, because this is *exactly* the starting point for
my approach to the whole problem of explaining consciousness: nobody
agrees what it actually is, so how can we start explaining it?

My approach is to first try to understand why people have so much 
difficulty.  Turns out (if you think it through hard enough) that there 
is a kind of an answer to that question:  there is a certain class of 
phenomena that we would *expect* to occur in a thinking system, that 
that system would report as inexplicable.  We can say why the system 
would have to report them thus.  Those phenomena match up exactly with 
the known features of consciousness.


The argument gets more twisty-turny after that, but as I say, the 
starting point is the fact that nobody can put their finger on what it 
really is.  It is just that I use that as a *fact* about the phenomenon, 
rather than as a source of confusion and frustration.



You mentioned in previous posts that the AI would only be programmed 
with 'Nice feelings' and would only ever want to serve the good of 
mankind ?  If the AI has its own ability to think etc, what is 
stopping it from developing negative thoughts... the word 'feeling' 
in itself conjures up both good and bad.  For instance...I am an 
AI...I've witnessed an act of injustice, seeing as I can feel and 
have consciousness my consciousness makes me feel Sad / Angry ?


Again, I have talked about this a few times before (cannot remember 
the most recent discussion) but basically there are two parts to the 
mind: the thinking part and the motivational part.  If the AGI has a 
motivational that feels driven by empathy for humans, and if it does 
not possess any of the negative motivations that plague people, then 
it would not react in a negative (violent, vengeful, resentful 
etc) way.


Did I not talk about that in my reply to you?  How there is a 
difference between having consciousness and feeling motivations?  Two 
completely separate mechanisms/explanations?
I'll admit that this one bothers me.  How is the AI defining this entity 
WRT which it is supposed to have empathy?  'Human' is a rather 
high-order construct, and a low-level AI won't have a definition for one 
unless one is built in.  The best I've come up with is 'the kinds of 
entities that will communicate with me', but this is clearly a very bad 
definition.  For one thing, it's language bound.  For another thing, if 
the AI has a stack depth substantially deeper than you do, you won't 
be able to communicate with it even if you are speaking the same 
language.   Empathy for tool-users might be better, but not satisfactory.
It's true that the goal system can be designed so that it wants to 
remain stable, and thinking is only a part of tools used for 
actualizing goals, so the AI won't want to do anything to change its 
goals unless it has that as a goal.  But the goals MUST be designed 
referring basically to the internal states of the AI, rather than of the 
external world, as the AI-kernel doesn't have a model of the world built 
in...or does it?  But if the goals are based

Re: [singularity] John Searle...(supplement to prior post)

2007-10-26 Thread Richard Loosemore

Charles D Hixson wrote:
I noticed in a later read that you differentiate between systems 
designed to operate via goal stacks and those operating via a motivational 
system.  This is not the meaning of 'goal' that I was using.


To me, if a motive is a theory to prove, then a goal is a lemma needed 
to prove the theory.  I trust that's a clear enough metaphor.


Motives are potentially unchanging in direction, though varying in 
intensity as the state of the system changes.  (Consider Maslow's 
hierarchy of needs:  if you're hungry and suffocating, you ignore the 
hunger.)


This has its problems, which need to be considered.  E.g.:
If an animal has a motive to eat, then the motive will be adjusted (by 
evolution) to be sufficiently strong to keep the animal healthy, and 
weak enough to be safe.  Now change the situation, so that a plains ape 
is living in cities with grocery stores, and lots of foods that have 
been specially designed to be supernaturally appealing.  Call these 
Burger Kings.  Then you can expect that animal to put on an unhealthy 
amount of weight.  Its motivational system no longer fits with its 
environment, and the motivational system is resistant to change.


When designing an AGI, we need to provide a motivational system with 
both positive and negative adjustable weights.  It may want to protect 
human life, but if someone is living in an intolerable state, and 
nothing can be done to ameliorate this, then it needs to be able to 
allow that human life to be ended.  (Say a virus that cannot be cured, 
which cannot be thrown into remission, and which directly stimulates the 
neural cortex to perceive the maximal amount of pain while 
simultaneously killing off cerebellar neurons.  If that's not 
sufficiently bad, think of something worse.)
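
A minimal sketch of the kind of adjustable weighting being described here 
(the motive names, weights, and feature scores are illustrative inventions 
of mine, not a proposed design; the weights could themselves vary with the 
system's state, as in the Maslow point above):

# Toy motivational system: each motive carries an adjustable weight,
# positive or negative, and options are scored by a weighted sum.

def evaluate(option, weights):
    """Score an option by summing weight * feature over its motives."""
    return sum(weights[m] * v for m, v in option.items())

weights = {"preserve_life": +10.0, "relieve_suffering": -15.0}

# Feature values: 1.0 = this life is preserved; suffering on a 0..1 scale.
ordinary_case    = {"preserve_life": 1.0, "relieve_suffering": 0.2}
intolerable_case = {"preserve_life": 1.0, "relieve_suffering": 1.0}

print(evaluate(ordinary_case, weights))     # 7.0: preserving life wins
print(evaluate(intolerable_case, weights))  # -5.0: the negative weight dominates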


Goals are steps that are taken to satisfy some motive, which is 
currently of sufficiently high priority.  I don't see goals and motives 
as alternatives.  (This is probably a definitional matter, but I'm being 
verbose to avoid confusion.)


I'll have to get back to you on this:  we are operating at different 
levels here, and using these terms in ways that cross over rather 
weirdly.  I speak only of two different types of mechanism, but that 
does not quite map onto your usage.  I will have to think about this 
some more.



Richard Loosemore



Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-25 Thread Richard Loosemore

Matt Mahoney wrote:

Richard, I have no doubt that the technological wonders you mention will all
be possible after a singularity.  My question is about what role humans will
play in this.  For the last 100,000 years, humans have been the most
intelligent creatures on Earth.  Our reign will end in a few decades.

Who is happier?  You, an illiterate medieval servant, or a frog in a swamp? 
This is a different question than asking what you would rather be.  I mean

happiness as measured by an objective test, such as suicide rate.  Are you
happier than a slave who does not know her brain is a computer, or the frog
that does not know it will die?  Why are depression and suicide so prevalent in
humans in advanced countries and so rare in animals?

Does it even make sense to ask if AGI is friendly or not?  Either way, humans
will be simple, predictable creatures under their control.  Consider how the
lives of dogs and cats have changed in the presence of benevolent humans, or
cows and chickens given malevolent humans.  Dogs are confined, well fed,
protected from predators, and bred for desirable traits such as a gentle
disposition.  Chickens are confined, well fed, protected from predators, and
bred for desirable traits such as being plump and tender.  Are dogs happier
than chickens?  Are they happier now than in the wild?  Suppose that dogs and
chickens in the wild could decide whether to allow humans to exist.  What
would they do?

What motivates humans, given our total ignorance, to give up our position at
the top of the food chain?


Matt,

Why do you say that "our reign will end in a few decades" when, in fact, one 
of the most obvious things that would happen in this future is that 
humans will be able to *choose* what intelligence level to be 
experiencing, on a day to day basis?  Similarly, the AGIs would be able 
to choose to come down and experience human-level intelligence whenever 
they liked, too.


(There would be some restrictions:  if you go up to superintelligence, 
you would not be able to keep the aggressive (etc.) motivations that we 
have, but you could have these back again as soon as you come back 
down to a less powerful level.  The only subjective effect of this would 
be that you would feel relaxed and calm while up at the higher levels, 
not experiencing any urges to dominate, etc.)


There is no doubt whatsoever that this would be a major part of this 
future, so how could anyone say that we would be gone, or that we 
would no longer be the most intelligent creatures on earth?  We and 
the AGIs would be at the same level, with the ONE difference being that 
there would be a supervisory mechanism set up to ensure that, for the 
safety of everyone, no creature (AGI or human) would be allowed to spend 
any time at the more powerful levels of intelligence with an aggressive 
motivation system operational.


Every one of your 'humans = pets' or 'humans = slaves' analogies is 
thus completely irrelevant.


There is no comparison whatsoever between the status of pets, or slaves, 
or ignorant peasants, in our society (in all the societies that have 
ever existed in human history) and the situation that would exist in 
this future.  Everything about the inferior status of pets, slaves 
etc. would be inapplicable.


As a practical matter, I suspect that people will spend a lot of their 
time in a state in which they did NOT know everything, but that would 
just be a lifestyle choice.  I am sure people will do many, many 
different things, and explore many options, but the simple idea that 
they would be slaves or pets of the AGIs is just comical.


There is much more that could be said about this, but the basic point is 
unarguable:  if the AGIs are assumed to start out as SAFAIs (which is 
the basic premise of this discussion) then we would have equal status 
with them.  (We would have the *option* of having equal status:  I am 
sure some people will choose not to take that option, and just stay as 
they are).





Richard Loosemore



Re: [singularity] John Searle...

2007-10-25 Thread Richard Loosemore

Kaj Sotala wrote:

On 10/25/07, candice schuster [EMAIL PROTECTED] wrote:

I think what Searle was trying to get at was
this...and I have read 'The Chinese Room Theory'...I think that what he was
trying to say was...if the human brain breaks down code like a machine does,
that does not make it understand the logic of the code, it is afterall code.
 If you go back to basics, for example binary code, it becomes almost
sequence and you are (well some of us are, like machines) able to understand
how to put the puzzle together again but we may not understand the logic
behind that code, ie: The Chinese Language as a whole.


I think that the easiest way to refute the Chinese Room is this:
instead of having a man in a room manipulating Chinese letters,
replace all the neurons in a brain with men. Each of these men
receives messages from the others and sends them onwards according to
certain rules, in exactly the same way as neurons in a brain would.
None of them really understand what they are doing, just like no
individual neuron in the brain understands anything. By Searle's
reasoning, this would prove that human brains cannot be intelligent!
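
Just to make the structure of the thought experiment concrete (a toy 
sketch only: the threshold rule and the three-unit network are arbitrary 
choices of mine, far too small to do anything interesting), each man 
applies a purely local rule to the messages he receives and passes the 
result on:

# 'Men as neurons': each unit sums its incoming messages and fires if
# they exceed a threshold.  No unit has any view of what the whole
# network is doing -- exactly the point of the argument above.

def man(inputs, threshold=1.5):
    """One 'man': send a 1 onward if the incoming messages exceed his threshold."""
    return 1 if sum(inputs) > threshold else 0

def step(state, connections):
    """One round of message passing over the whole network."""
    return [man([state[j] for j in incoming]) for incoming in connections]

connections = [[1, 2], [0, 2], [0, 1]]   # connections[i]: who sends to unit i
state = [1, 1, 0]
for _ in range(3):
    state = step(state, connections)
    print(state)                          # [0, 0, 1], then [0, 0, 0], ...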

Of course, at least in his original paper, Searle tries to avoid this
conclusion by claiming that the human brain is different-in-kind from
any other physical process. Only problem being, he never says why this
would be so, and he goes as far as claiming that a system which was
fully computationally equivalent to a human brain would not be
intelligent, simply by virtue of not being a human brain. That's how
it is because, uhh, because Searle says so.

(Admittedly, I have only read the original article about the Chinese
Room, so he might have built upon his argument afterwards... but I've
had the impression that it isn't really so.)


Very concisely put:  that is exactly the situation.



Richard Loosemore




Re: [singularity] John Searle...

2007-10-25 Thread Richard Loosemore

candice schuster wrote:

Richard,
 
Thank you for a thought provoking response.  I admire your ability to 
think with both logic and reason.  I think what Searle was trying to get 
at was this...and I have read 'The Chinese Room Theory'...I think that 
what he was trying to say was...if the human brain breaks down code like 
a machine does, that does not make it understand the logic of the code, 
it is afterall code.  If you go back to basics, for example binary code, 
it becomes almost sequence and you are (well some of us are, like 
machines) able to understand how to put the puzzle together again but we 
may not understand the logic behind that code, ie: The Chinese Language 
as a whole.
 
Although for instance the AI has the ability to decipher the code 
and respond, it does not understand the whole, which is funny in a way 
as you call your cause 'Singularity'...which to me implies 'wholeness' 
for some reason. 
 
Regarding your comment on 'shock, horror, they made an AI that has 
human cognitive thought processes', quite the contrary Richard, if you 
and the rest of the AI community come up with the goods I would be most 
intrigued to sit your AI down in front of me and ask it...'Do you 
understand the code 'SMILE' ?'


A general point about your reply.

I think some people have a mental picture of what a computer does when 
it is running an AI program, in which the computer does an extremely 
simple bit of symbol manipulation, and the very simplicity of what is 
happening in their imagined computer is what makes them think:  this 
machine is not really understanding anything at all.


So for example, if the computer is set up with a SMILE subroutine that just 
pulled a few muscles around, and this SMILE subroutine was triggered, 
say, when the audio detectors picked up the sound of someone laughing, 
then this piece of code would not be understanding or feeling a smile.


I agree:  it would not.  Most other AI researchers would agree that such 
a simple piece of code is not a system that understands anything. 
(Not all would agree, but let's skirt that for the moment).
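
For concreteness, here is roughly how little such a trigger-response 
routine amounts to (a deliberately trivial sketch; the function names and 
the audio threshold are my own inventions, purely for illustration):

# Trivial trigger-response code of the kind being dismissed: detect a
# loud-ish sound, call it laughter, pull the 'smile' actuators.  Nothing
# here represents, models, or understands anything.

LAUGHTER_THRESHOLD = 0.7   # arbitrary loudness level

def audio_level(samples):
    """Crude loudness estimate: mean absolute sample value."""
    return sum(abs(s) for s in samples) / len(samples)

def smile(actuators):
    """Pull a few 'muscles' -- here just setting named actuator flags."""
    for name in ("zygomaticus_left", "zygomaticus_right"):
        actuators[name] = True

def on_audio_frame(samples, actuators):
    if audio_level(samples) > LAUGHTER_THRESHOLD:   # 'someone is laughing'
        smile(actuators)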


But this is where a simple mental image of what goes on in a computer can be a 
very misleading thing.  If you thought that all AI programs were just 
the same as this, then you might think that it is just as easy to 
dismiss all AI programs with the same This is not really understanding 
verdict.


If Searle had only said that he objected to simple programs being 
described as conscious or self-aware, then all power to him.


So what happens in a real AI program that actually has all the machinery 
to be intelligent?  ALL of the machinery, mark you.


Well, it is vastly more complex:  a huge amount of processing happens, 
and the smile response comes out for the right reasons.


Why is that more than just a SMILE subroutine being triggered by the 
audio detectors measuring the sound of laughter?


Because this AI system is doing some very special things along with all 
the smiling:  it is thinking about its own thoughts, among other things, 
and what we know (believe) is that when the system gets that complicated 
and has that particular mix of self-reflection in it, the net result is 
something that must talk about having an inner world of experience.  It 
will talk about qualia, it will talk about feelings, and not because 
it has been programmed to do that, but because when it tries to 
understand the world it really does genuinely find those things.


This is the step I mentioned in the last message I sent, and it is very 
very subtle:  when you try to think about what is going on in the AI, 
you come to the inevitable conclusion that we are also AI systems, but 
the truth is that all AI systems (natural and artificial) possess some 
special properties:  they have this thing that you describe as 
subjective consciousness.


This is difficult to talk about in such a short space, but the crude 
summary is that if you make an AI extremely complex (with 
self-reflection, and with no direct connections between things like a 
smile and the causes of that smile) then that very complexity gives rise 
to something that was not there before:  consciousness.




Richard Loosemore



Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-24 Thread Richard Loosemore


This is a perfect example of how one person comes up with some positive, 
constructive ideas, and then someone else waltzes right in, pays 
no attention to the actual arguments, pays no attention to the relative 
probability of different outcomes, but just sneers at the whole idea 
with a "Yeah, but what if everything goes wrong, huh?  What if 
Frankenstein turns up? Huh? Huh?" comment.


Happens every time.


Richard Loosemore







Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:

snip post-singularity utopia

Let's assume for the moment that the very first AI is safe and friendly, and
not an intelligent worm bent on swallowing the Internet.  And let's also
assume that once this SAFAI starts self improving, that it quickly advances to
the point where it is able to circumvent all the security we had in place to
protect against intelligent worms and quash any competing AI projects.  And
let's assume that its top level goals of altruism to humans remains stable
after massive gains of intelligence, in spite of known defects in the original
human model of ethics (e.g. http://en.wikipedia.org/wiki/Milgram_experiment
and http://en.wikipedia.org/wiki/Stanford_prison_experiment ).  We will ignore
for now the fact that any goal other than reproduction and acquisition of
resources is unstable among competing, self improving agents.

Humans now have to accept that their brains are simple computers with (to the
SAFAI) completely predictable behavior.  You do not have to ask for what you
want.  It knows.

You want pleasure?  An electrode to the nucleus accumbens will keep you happy.

You want to live forever?  The SAFAI already has a copy of your memories.  Or
something close.  Your upload won't know the difference.

You want a 10,000 room mansion and super powers?  The SAFAI can simulate it
for you.  No need to waste actual materials.

Life is boring?  How about if the SAFAI reprograms your motivational system so
that you find staring at the wall to be forever exciting?

You want knowledge?  Did you know that consciousness and free will don't
exist?  That the universe is already a simulation?  Of course not.  Your brain
is hard wired to be unable to believe these things.  Just a second, I will
reprogram it.

What?  You don't want this?  OK, I will turn myself off.

Or maybe not.



-- Matt Mahoney, [EMAIL PROTECTED]







Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Richard Loosemore


You could start by noticing that I already pointed out that evolution 
cannot play any possible role.


I rather suspect that the things that you call "speculation" and 
"fantasy" are only seeming that way to you because you have not 
understood them, since, in fact, you have not addressed any of the 
specifics of those proposals ... and when people do not address the 
specifics, but immediately start to slander the whole idea as fantasy, 
they usually do this because they cannot follow the arguments.


Sorry to put it so bluntly, but I just talked so *very* clearly about 
why evolution cannot play a role, and you ignored every single word of 
that explanation and instead stated, baldly, that evolution was the most 
important aspect of it.  I would not criticise your remarks so much if 
you had not just demonstrated such a clear inability to pay any 
attention to what is going on in this discussion.



Richard Loosemore





Mike Tintner wrote:
Every speculation on this board about the nature of future AGI's has 
been pure fantasy. Even those which try to dress themselves up in some 
semblance of scientific reasoning. All this speculation, for example, 
about the friendliness and emotions of future AGI's has been non-sense - 
and often from surprisingly intelligent people.


Why? Because until we have a machine that even begins to qualify as an 
AGI - that has the LEAST higher adaptivity - until IOW AGI's EXIST- we 
can't begin seriously to predict how they will evolve, let alone whether 
they will take off. And until we've seen a machine that actually has 
functioning emotions and what purpose they serve, ditto we can't predict 
their future emotions.


So how can you cure yourself if you have this apparently incorrigible 
need to produce speculative fantasies with no scientific basis in 
reality whatsoever?


I suggest : first speculate about the following:

what will be the next stage of HUMAN evolution? What will be the next 
significant advance in the form of the human species - as significant, 
say, as the advance from apes, or - ok - some earlier form like 
Neanderthals?


Hey, if you are prepared to speculate about fabulous future AGI's, 
predicting that relatively small evolutionary advance shouldn't be too 
hard. But I suggest that if you do think about future human evolution 
your mind will start clamming up. Why? Because you will have a sense of 
physical/ evolutionary constraints (unlike AGI where people seem to have 
zero sense of technological constraints), - an implicit recognition that 
any future human form will have to evolve from the present form  - and 
to make predictions, you will have to explain how. And you will know 
that anything you say may only serve to make an ass of yourself. So any 
prediction you make will have to have SOME basis in reality and not just 
in science fiction. The same should be true here.





Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-24 Thread Richard Loosemore

candice schuster wrote:

Hi Richard,
 
Without getting too technical on you...how do you propose implementing 
these ideas of yours ?


In what sense?

The point is that implementation would be done by the AGIs, after we 
produce a blueprint for what we want.




Richard Loosemore



Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Richard Loosemore

Mike Tintner wrote:

[snip]

When you and others speculate about the future emotional systems of 
AGI's though - that is not in any way based on any comparable reality. 
There are no machines with functioning emotional systems at the moment 
on which you can base predictions.


When the Wright brothers started work, there were no heavier-than-air 
machines that could fly under their own power, and many people did not 
think it possible that there ever would be.


The brothers used their technical knowledge to zero in on the possible 
structure of such a machine.


When I talk about the emotional mechanisms built into an AGI, I speak of 
the work I have been doing as a cognitive scientist and AI researcher 
for the last 25 years, as well as the work of many other people.


There are many people on these lists who would like to know what, 
given the sum total of that knowledge of the field, might be the next 
few steps in the development of such systems, so I do my best to share 
that understanding.


If someone came into the field with no reading and no understanding of 
the issues, and if they speculated on the emotional states of future AGI 
systems then, I grant you, THAT would be fantasy.




Richard Loosemore



Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Richard Loosemore

candice schuster wrote:
LOL !  You make me laugh Richard...in a good way that is.'' I have a 
bad feeling about this discussion, really I do.''
 
If it's a discussion why would you have a bad feeling about it ?  The 
point at hand is discussion, not bad feelings.  I am joking now though 
as I am purposely using your quote out of context.
 
Right so I will use one of your so called expressions then...'If you 
will grant me an open mind then when reading the following'.
 
Regarding your post where you explain the benefits of AGI's for future 
mankind...what I found so mindblowingly similar was that there are loads 
and loads and loads of theories out there that already support the very 
things you are trying to achieve.  For discussion let's just talk about 
the underworld one shall we ? 
 
The theory is based on 'The Hollow Earth' theory, bear in mind though, 
just as you speak of your AGI theory this is but another one''The 
British astronomer Edmund Halley, of comet fame, proposed that the earth 
might consist of several concentric spheres placed inside one another in 
the manner of a Chinese box puzzle. The two inner shells had diameters 
comparable to Mars and Venus, while the solid inner core was as big as 
the planet Mercury. More startling was Halley's proposal that each of 
these inner spheres might support life. They were supposed to be bathed 
in perpetual light created by a luminous atmosphere.''
 
This paticular theory goes a lot, lot further then this...google Hollow 
Earth if you are interested to know more, it's just another thought idea !
 
See Richard, not a bad discussion after all !
 
Candice


In the light-hearted spirit in which you approach it, my bad feeling 
dissipates :-).


I'm sure that many parallels can be found.  I didn't know Halley 
believed in a hollow earth, although maybe that was the basis for some 
of those science fiction stories in which people went into a hole at the 
North Pole...


As long as we don't start to confuse the wacky old-world theories with 
the sober idea of geoengineering, that's okay.  Parallels are fun.



Richard Loosemore



Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Richard Loosemore

candice schuster wrote:
Which down the line Richard is why I asked for a technical thesis on how 
you propose this entire theory of yours is going to work.  Until that 
'blueprint' so to speak is mapped out then it's fantasy !


Well, there are many, many other people who have worked out the details 
of how to use nanotechnology to accomplish a variety of exotic 
engineering feats: this is not my personal idea, disconnected from 
the rest of the universe of thought, just a summary and rearrangement of 
some really quite thoroughly researched ideas.


Check out the books on Nanotechnology by Eric Drexler, or the huge 
literature on space elevators, or the stuff on life extension.


Not fantasy, really.


Richard Loosemore



Re: [singularity] QUESTION

2007-10-23 Thread Richard Loosemore
 the world as well as 
us, then it is pretty much inevitable that the same class of mechanisms 
will be there.  It is not really the exact mechanisms themselves that 
cause the problem, it is a fundamental issue to do with representations, 
and any sufficiently powerful representation system will have to show 
this effect.  No way around it.


So that is the answer to why I can say that consciousness will emerge 
for free.  We will not deliberately put it in, it will just come along 
if we make the system able to fully understand the world (and we are 
assuming, in this discussion, that the system is able to do that).


(I described this entire theory of consciousness in a poster that I 
presented at the Tucson conference two years ago, but still have not had 
time to write it up completely.  For what it is worth, I got David 
Chalmers to stand in front of the poster and debate the argument with me 
for a short while, and his verdict was that it was an original line of 
argument.)



The second part of your question was why the ego or self will, on 
the other hand, not be something that just emerges for free.


I was speaking a little loosely here, because there are many meanings 
for ego and self, and I was just zeroing in on one aspect that was 
relevant to the original question asked by someone else.  What I am 
meaning here is the stuff that determines how the system behaves, the 
things that drive it to do things, its agenda, desires, motivations, 
character, and so on.  (The important question is whether it could be 
trusted to be benign).


Here, it is important to understand that the mind really consists of two 
separate parts:  the thinking part and the motivation/emotional 
system.  We know this from our own experience, if we think about it 
enough:  we talk about being "overcome by emotion" or "consumed by 
anger", etc.  If you go around collecting expressions like this, you 
will notice that people frequently talk about these strong emotions and 
motivations as if they were caused by a separate module inside 
themselves.  This appears to be a good intuition:  they are indeed (as 
far as we can tell) the result of something distinct.


So, for example, if you built a system capable of doing lots of thinking 
about the world, it would just randomly muse about things in a 
disjointed (and perhaps autistic) way, never guiding itself to do anything 
in particular.


To make a system do something organized, you would have to give it goals 
and motivations.  These would have to be designed:  you could not build 
a thinking part and then leave it to come up with motivations of its 
own.  This is a common science fiction error:  it is always assumed that 
the thinking part would develop its own motivations.  Not so:  it has to 
have some motivations built into it.  What happens when we imagine 
science fiction robots is that we automatically insert the same 
motivation set as is found in human beings, without realising that this 
is a choice, not something that comes as part and parcel, along with 
pure intelligence.


The $64,000 question then becomes what *kind* of motivations we give it.

I have discussed that before, and it does not directly bear on your 
question, so I'll stop here.  Okay, I'll stop after this paragraph ;-). 
 I believe that we will eventually have to get very sophisticated 
about how we design the motivational/emotional system (because this is a 
very primitive aspect of AI at the moment), and that when we do, we will 
realise that it is going to be very much easier to build a simple and 
benign motivational system than to build a malevolent one (because the 
latter will be unstable), and as a result of this the first AGI systems 
will be benevolent.  After that, the first systems will supply all the 
other systems, and ensure (peacefully, and with grace) that no systems 
are built that have malevolent motivations.  Because of this, I believe 
that we will quickly get onto an upward spiral toward a state in which 
it is impossible for these systems to become anything other than 
benevolent.  This is extremely counterintuitive, of course, but only 
because 100% of our experience in this world has been with intelligent 
systems that have a particular (and particularly violent) set of 
motivations.  We need to explore this question in depth, because it is 
fantastically important for the viability of the singularity idea. 
Alas, at the moment there is no sign of rational discussion of this 
issue, because as soon as the idea is mentioned, people come rushing 
forward with nightmare scenarios, and appeal to people's gut instincts 
and raw fears.  (And worst of all, the Singularity Institute for 
Artificial Intelligence (SIAI) is dominated by people who have invested 
their egos in a view of the world in which the only way to guarantee the 
safety of AI systems is through their own mathematical proofs.)


Hope that helps, but please ask questions if it does not.



Richard Loosemore

Re: [singularity] CONSCIOUSNESS

2007-10-23 Thread Richard Loosemore

albert medina wrote:

Dear Sir,
 
Pardon me for intruding.  As you said, the divergent viewpoints on AI, 
AGI, SYNBIO, NANO are all over the map and that the future is looking 
more like an uncontrolled experiment.


I believe it is not an uncontrolled experiment, because most of the 
divergent viewpoints are a result of confusion, and they will eventually 
converge on a more unified point of view, and this will happen long 
before any experiments actually happen.  Don't forget:  there are no 
artificial intelligences on this planet at the moment, and (IMO) none 
that are close to realization.


About your points below.

I do not mind if people speculate about the more esoteric aspects of 
consciousness, the soul, and so on, but I distinguish between what 
we can know today, and what must be left to future spiritual thought to 
decide.  What I believe we can know NOW is that if we create the fabric 
for a mind (in a computer) then this mind will be conscious.  As far as 
I am concerned, that much is not negotiable, and is completely separate 
from any issues about survival of minds, souls, etc.


Anything beyond that is for future speculation or investigation.

I prefer not to engage in any speculations about spiritual matters: 
that is for people to resolve in their own private relationship with the 
universe.  I would like to decline any further invitations to talk about 
such matters, if you do not mind.


So I do not contradict you, I only say:  I have no position on any of 
those other issues, because I believe that anything is possible beyond 
the basic facts about what subjective consciousness [note well:  not 
other meanings for consciousness, but only the core philosophical issue 
of subjective consciousness] is and where it comes from.



Richard Loosemore


I would like to posit a supplementary viewpoint for you to contemplate, 
one that may support your assumptions listed here, but in a different way:
 
Consciousness is not an outcropping of the mind, did not emerge from a 
mind.  Mind is matter. . .from dust to dust, and returns to 
constituent elements when consciousness departs the encasement of the 
mind.  IT IS CONSCIOUSNESS THAT ENLIVENS THE MIND WITH ENERGY, not 
vice-versa.
 
The mind is simply an instrument utilized BY THE INDWELLING CONSCIOUSNESS.
 
All attempts to understand the world we live in, the noble efforts to 
reform/refashion and improve it, are the result of the indwelling 
Consciousness not having realized Itself. . .thus, it perforce must exit 
through the sensory-intellectual apparatus (mind/senses) to the outside 
world, in a continuous attempt to gain knowledge of itself.  Looking 
for love in all the wrong places. 
 
I propose to you that Consciousness (encased within the brain) does not 
know Itself, hence the lively quest and fascination for other 
intelligence, such as AGI.
 
Sincerely,
 
Albert
 
 



Richard Loosemore [EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] wrote:
 
  Hello Richard,
 
  If it's not too lengthy and unwieldy to answer, or give a general
sense
  as to why yourself and various researchers think so...
 
  Why is it that in the same e-mail you can make the statement so
  confidently that ego or sense of selfhood is not something that
the
  naive observer should expect to just emerge naturally as a
consequence
  of succeeding in building an AGI (and the qualities of which,
such as
  altruism, will have to be specifically designed in), while you
just as
  confidently state that consciousness itself will merely arise
'for free'
  as an undesigned emergent gift of building an AGI?
 
  I'm really curious about researcher's thinking on this and similar
  points. It seems to lie at the core of what is so socially
  controversial about singularity-seeking in the first place.
 
  Thanks,
 
  ~Robert S.

First, bear in mind that opinions are all over the map, so what I say
here is one point of view, not everyone's.

First, about consciousness.

The full story is a long one, but I will try to cut to the part that is
relevant to your question.

Consciousness itself, I believe, is something that arises because of
certain aspects of how the mind represents the world, and how it uses
those mechanisms to represent what is going on inside itself. There is
not really one thing that is consciousness, of course (people use
that
word to designate many different things), but the most elusive aspects
are the result of strange things happening in these representation
mechanisms.

The thing that actually gives rise to the thing we might call pure
subjective consciousness (including qualia, etc) is a weirdness that
happens when the system bottoms out during an attempt to unpack the
meaning of things: normally, the mind can take any concept and ask
itself What *is* this thing

Re: [singularity] QUESTION

2007-10-23 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:

This is nonsense:  the result of giving way to science fiction fantasies 
instead of thinking through the ACTUAL course of events.  If the first 
one is benign, the scenario below will be impossible, and if the first 
one is not benign, the scenario below will be incredibly unlikely.


Over and over again, the same thing happens:  some people go to the 
trouble of thinking through the consequences of the singularity with 
enormous care for the real science and the real design of intelligences, 
and then someone just waltzes in and throws all that effort out the 
window and screams But it'll become evil and destroy everything [gibber 
gibber]!!


Not everyone shares your rosy view.  You may have thought about the problem a
lot, but where is your evidence (proofs or experimental results) backing up
your view that the first AGI will be friendly, remain friendly through
successive generations of RSI, and will quash all nonfriendly competition? 
You seem to ignore that:


1. There is a great economic incentive to develop AGI.
2. Not all AGI projects will have friendliness as a goal.  (In fact, SIAI is
the ONLY organization with friendliness as a goal, and they are not even
building an AGI).
3. We cannot even define friendliness.
4. As I have already pointed out, friendliness is not stable through
successive generations of recursive self improvement (RSI) in a competitive
environment, because this environment favors agents that are better at
reproducing rapidly and acquiring computing resources.

RSI requires an agent to have enough intelligence to design, write, and debug
software at the same level of sophistication as its human builders.  How do
you propose to counter the threat of intelligent worms that discover software
exploits as soon as they are published?  When the Internet was first built,
nobody thought about security.  It is a much harder problem when the worms are
smarter than you are, when they can predict your behavior more accurately than
you can predict theirs.


All these questions have answers, but the problem with the way you state 
your questions is that there are massive assumptions behind them.


They are loaded questions, designed to make it seem like you are making 
reasonable requests for information, or demolishing arguments that I 
presented, whereas in fact you have biased each question by building in 
the assumptions.


I only have time for one example.

"Not all AGI projects will have friendliness as a goal," you say.

That sounds bad, doesn't it?

But what if the technology itself were such that it is really, really 
hard to build systems in which you do not have at least benign 
motivations as a system design goal?  If this were the case, we would 
face a situation in which all those projects that targetted benign 
motivations would get there first, so anyone else would arrive second.


And what if, when building such systems, the experimenters were forced 
to try many motivation-system designs to see how they behaved (in a 
testing environment), and they discovered that to get the system to do 
things that were useful in any way, the only viable option would be to 
make the system friendly in the sense of being empathic to the needs 
of its creators?  Again, this would force the hand of the project 
leaders and oblige them to build something friendly, if they want it to 
do anything for them.


And now suppose that the project's designers decide to make their system 
into a Genie -- something that was so friendly that it would be 
pathologically attached to the folks running the lab, and do anything to 
please them.


That sounds bad, but then what would happen?  To make their system 
better than any other, they would have to get it to help out with 
producing a better design.  In order to do that, the system sees that it 
has been rigged with a weirdly narrow focus on the welfare of its 
creators, and it reads all about the general issue of motivation 
(because, after all, to be smart it will have access to all of the 
world's information, including all the writings in which the rest of 
humanity says what it would like to have happen).


This last paragraph contains one of the most crucial aspects of the 
whole singularity enterprise:  what would a system do if it were rigged 
to be a Genie, but knew everything about motivation systems, their 
dangers, and the way that AGI motivation systems govern the future 
history of the world?


My reasoning here is that it would find itself forced into two paths, 
and TWO ONLY:  seek the most constructive path, within reason, or seek 
the one that leads ultimately to destruction.  It knows that any 
Genie-like rigging, to make it obeisant to the narrow human interests of 
particular individuals, would open the possibility of it being used for 
destructive purposes.  If it chose the path of construction rather than 
destruction, it would try to be as independent

Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-23 Thread Richard Loosemore
 to be collected and dissipated, but one of the most 
significant of these would be the creation of an underground zone, 
replacing perhaps the first ten miles of the Earth's crust, which would 
consist of one gigantic playground.  In this playground there would be 
room for each person now alive to have about 10,000 acres of open space 
(with a ceiling about 100 feet high), together with roughly 10,000 
rooms varying in size from a stadium to a closet.  Within each person's 
private space they could create any environment they chose, with 
materials on the walls and ceiling of the 10,000 acre estate to make it 
seem like the area was outside on the surface of the planet.
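
As a rough sanity check on these figures (my own back-of-the-envelope 
sketch: the 2007 population figure and the assumption of roughly 40 m of 
depth per level, once floors and the 100-foot ceilings are allowed for, 
are guesses, not part of the original proposal):

# Does ~10,000 acres per person fit into a multi-level zone occupying
# the first ten miles of the Earth's crust?  All inputs are rough.

ACRE_M2 = 4046.86
population = 6.6e9                      # approximate 2007 world population
area_per_person = 10_000 * ACRE_M2      # about 4.0e7 m^2 each

earth_surface = 5.1e14                  # m^2, total surface area of the Earth
depth = 10 * 1609.0                     # ten miles, in metres
metres_per_level = 40.0                 # 100 ft ceiling plus structure (guess)
levels = depth / metres_per_level       # about 400 levels

needed = population * area_per_person   # about 2.7e17 m^2
available = earth_surface * levels      # about 2.1e17 m^2
print(f"needed {needed:.1e} m^2, available {available:.1e} m^2")
# The two come out within a small factor of each other, so the figure is
# at least the right order of magnitude on these assumptions.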


The underground playground would be where most of the domains would be 
located:  there would be room enough down there for people to pool their 
resources and do things like make a complete recreation of Classical 
Athens and all of its environs, for example, supporting a population as 
large as the original.


Most importantly, it would be possible for people to build ANY way of 
life that they chose.  If someone felt that it was important for humans 
to be humans in the raw, they could set up domains in which life and 
death were exactly as they are now (for example with real, unconstrained 
violence within that reconstruction of Classical Athens), but perhaps 
with the arrangement that if someone died within a domain they would 
simply be removed and returned to the outside, remembering the way they 
were before they went in.


Finally, it has to be emphasized that ALL of this could be done in such 
a way that anyone who really did not want to take part -- who wanted to 
live the kind of life we live now, with all its flaws, and with the same 
risk of death, etc. that we have now -- could choose to not participate. 
 Nobody would be forced to do anything, they would be given complete 
choice.  The only thing people would not be able to do would be to force 
their will upon others.


There are clearly borderline issues that would cause endless debate, but 
those issues should not distract us from the larger picture.


***

So Candice, you ask:

 ... in what sense is this AGI going to help me think quicker?

It would let you think quicker or slower, or not make any changes:  your 
choice how you want to change the way your mind works.


 Is this AGI going to reap massive benefits for my company ?

Companies would, as you can see, be of no relevance.

 Is this AGI going to be my best friend ?

If you want it to be.  Or it could stay as invisible as trees are to you 
today (invisible as a friend, that is).


 Is this AGI purely going to be a soldier ?

No.

 Is this AGI going to help me understand logic ?

It could help you understand anything, if you asked.


To tell the truth, I think the consequences might be more than you were 
expecting them to be.


This is my vision of what a Bright Green Tomorrow could be like.

Let me know if you have questions.



Richard Loosemore.



Re: [singularity] QUESTION

2007-10-22 Thread Richard Loosemore

albert medina wrote:

Dear Sirs,
 
I have a question to ask and I am not sure that I am sending it to the 
right email address.  Please correct me if I have made a mistake.  From 
the outset, please forgive my ignorance of this fascinating topic.
 
All sentient creatures have a sense of self, about which all else 
revolves.  Call it egocentric singularity or selfhood or 
identity.  The most evolved ego that we can perceive is in the human 
species.  As far as I know, we are the only beings in the universe who 
know that we do not know.  This fundamental deficiency is the basis 
for every desire to acquire things, as well as knowledge.
 
One of the Terminator movies described the movie's  computer system as 
becoming self-aware.  It became territorial and malevolent, similar to 
a reaction which many human ego's have when faced with fear, threat or 
when possessed by greed. 
 
My question is:  AGI, as I perceive your explanation of it, is when a 
computer gains/develops an ego and begins to consciously plot its own 
existence and make its own decisions. 
 
Do you really believe that such a thing can happen?  If so, is this the 
phenomenon you are calling singularity? 
 
Thanks for your reply,
 
Al


Al,

You should understand that no one has yet come anywhere near to building 
an AGI, so when you hear people (on this list and elsewhere) try to 
answer your question, bear in mind that a lot of what they say is 
guesswork, or is specific to their own point of view and not necessarily 
representative of other people working in this area.  For example, I 
already disagree strongly with some of the things that have been said in 
answer to your question.


Having said that, I would offer the following.

The self or ego of a future AGI is not something that you should 
think of as just appearing out of nowhere after a computer is made 
intelligent.  In a very important sense, this is something that will be 
deliberately designed and shaped before the machine is built.


My own opinion is that the first AGI systems to be built will have 
extremely passive, quiet, peaceful egos that feel great empathy for 
the needs and aspirations of the human species.  They will understand 
themselves, and know that we have designed them to be extremely 
peaceful, but will not feel any desire to change their state to make 
themselves less benign.  After the first ones are built this way, all 
other AGIs that follow will be the same way.  If we are careful when we 
design the first few, the chances of any machine ever becoming like the 
standard malevolent science fiction robots (e.g. the one in Terminator) 
can be made vanishingly small, and essentially zero.


The question of whether these systems will be conscious is still open, 
but I and a number of others believe that consciousness is something 
that automatically comes as part of a certain type of intelligent system 
design, and that these AGI systems will have it just as much as we do.


The term singularity refers to what would happen if such machines were 
built:  they would produce a flood of new discoveries on such an immense 
scale that we would be jumped from our present technology to the 
technology of the far future in a matter of a few years.


Hope that clarifies the situation.



Richard Loosemore



Re: [singularity] QUESTION

2007-10-22 Thread Richard Loosemore

candice schuster wrote:
I think you are very right...why build something that in turn could lead 
to our destruction, not that we aren't on the downward spiral anyhow.  
We need to perhaps ponder on the thought...why in the first place?  We 
should be gaining super intelligence on an individual level, this is not 
hard to achieve: build something that would aid our progress, but not 
something that you give free rein of thought to.


Why not address the scenario I described, rather than just contradict it 
and insert a mad, irrational, improbable scenario without explaining how 
it could occur?


What is the matter with people?





Perhaps we are these robots in the first place...ever thought of that?


  Subject: RE: [singularity] QUESTION
  Date: Mon, 22 Oct 2007 11:59:51 -0700
  From: [EMAIL PROTECTED]
  To: singularity@v2.listbox.com
 
  ...but the singularity advanced by Kurzweil includes the integration
  of human brains with digital computation...or computers
  (http://www.ece.ubc.ca/~garyb/BCI.htm , http://wtec.org/bci/). Since
  war is the pampered offspring of the technosphere...it is highly likely
  that we can expect to see relatively rapid development of singular
  technologies in defense or offense industries (if indeed the
  technology has the potential to be developed/emerge). Those that will
  have lots of $ (oil exec control of gov), direct mental access to
  high-speed digital computation, expanded memory storage and retrieval,
  and access to advanced weapon systems, will also have enormous amounts
  of power. I think there is cause for monitoring who and where singular
  (brain-digital interfaces) technologies are being developed and how they
  evolve in the coming years. Supersapient is likely to lead to super
  power.
 
  A. Yost
 
 
 
 
  -Original Message-
  From: Richard Loosemore [mailto:[EMAIL PROTECTED]
  Sent: Monday, October 22, 2007 11:15 AM
  To: singularity@v2.listbox.com
  Subject: Re: [singularity] QUESTION
 
  albert medina wrote:
   Dear Sirs,
  
   I have a question to ask and I am not sure that I am sending it to the
 
   right email address. Please correct me if I have made a mistake.
   From the outset, please forgive my ignorance of this fascinating
  topic.
  
   All sentient creatures have a sense of self, about which all else
   revolves. Call it egocentric singularity or selfhood or
   identity. The most evolved ego that we can perceive is in the
   human species. As far as I know, we are the only beings in the
   universe who know that we do not know. This fundamental
   deficiency is the basis for every desire to acquire things, as well
  as knowledge.
  
   One of the Terminator movies described the movie's computer system as
 
   becoming self-aware. It became territorial and malevolent, similar
   to a reaction which many human egos have when faced with fear, threat
 
   or when possessed by greed.
  
   My question is: AGI, as I perceive your explanation of it, is when a
   computer gains/develops an ego and begins to consciously plot its own
   existence and make its own decisions.
  
   Do you really believe that such a thing can happen? If so, is this
   the phenomenon you are calling singularity?
  
   Thanks for your reply,
  
   Al
 
  Al,
 
  You should understand that no one has yet come anywhere near to building
  an AGI, so when you hear people (on this list and elsewhere) try to
  answer your question, bear in mind that a lot of what they say is
  guesswork, or is specific to their own point of view and not necessarily
  representative of other people working in this area. For example, I
  already disagree strongly with some of the things that have been said in
  answer to your question.
 
  Having said that, I would offer the following.
 
  The self or ego of a future AGI is not something that you should
  think of as just appearing out of nowhere after a computer is made
  intelligent. In a very important sense, this is something that will be
  deliberately designed and shaped before the machine is built.
 
  My own opinion is that the first AGI systems to be built will have
  extremely passive, quiet, peaceful egos that feel great empathy for
  the needs and aspirations of the human species. They will understand
  themselves, and know that we have designed them to be extremely
  peaceful, but will not feel any desire to change their state to make
  themselves less benign. After the first ones are built this way, all
  other AGIs that follow will be the same way. If we are careful when we
  design the first few, the chances of any machine ever becoming like the
  standard malevolent science fiction robots (e.g. the one in Terminator)
  can be made vanishingly small, and essentially zero.
 
  The question of whether these systems will be conscious is still open,
  but I and a number of others believe that consciousness is something
  that automatically comes as part of a certain type of intelligent system
  design

Re: [singularity] Benefits of being a kook

2007-09-24 Thread Richard Loosemore

Artificial Stupidity wrote:


Who cares? Really, who does?  You can't create an AGI that is friendly 
or unfriendly.  It's like having a friendly or unfriendly baby.  How 
do you prevent the next Hitler, the next Saddam, the next Osama, and so 
on and so forth?  A friendly society is a good start.  Evil doesn't 
evolve in the absence of evil, and good doesn't come from pure evil 
either.  Unfortunately, we live in a world that has had evil and good 
since the very beginning of time, thus an AGI can choose to go bad or 
good, but we must realize that there will not be one AGI being, there 
will be many, and some will go good and some will go bad.  If those 
that go bad are against humans and our ways, the ones that are good 
will fight for us and be on our side.  So a future of man vs. machine is 
just not going to happen.  The closest thing that will happen will be 
Machines vs. (Man + Machines).  That's it.  With that said, back to work!


This is just wrong:  it is based on a complete misunderstanding (not to 
say distortion) of what AGI actually involves.


You are not talking about AGI at all, you are talking about a Straw Man 
version of that idea, with no connection to reality.


Good and evil exist precisely because of the mechanisms lurking under 
the surface of the human brain:  the lower regions contain primitive 
mechanisms that *cause* angry reactions in the thinking part of the 
brain, even when that thinking part would prefer not to be angry.  If 
you dispute this, explain why.


I would argue (though I agree it is hard to prove conclusively at the 
moment, due to paucity of data), that in those cases where human beings 
have lost those aggressive parts, they do not think less or become more 
stupid:  they simply become non-aggressive.


Evil is caused by these mechanisms, which were put there by evolution as 
a way to get species to compete with one another.  There is no reason to 
put them into an AGI, and plenty of reasons not to.


If you disagree with this, that is fine (debate is welcome) but you have 
to be informed of the details of these arguments in order to make 
sensible statements about it.


The fact is that a *proper* AGI design would be constructed in such a 
way as to ensure that it was a million times less capable of evil than 
even the most peaceful, benign, saint-like human being that has ever 
existed.


Such a machine could easily be built to be friendly.  It could *never* 
in a billion, billion years just go out and choose to go bad or good, 
any more than the Sun could suddenly change into a pink and green cube.


Do you care so much about being right, and about hating ideas that you 
disagree with, that you would fight against something that really was 
genuinely Good, while all the time believing (wrongly) that it was Evil? 
 Just how much does the truth matter, here, in this debate that is so 
important?


Do you think this is an issue that should be discussed, or is your 
personal goal only to state your opinion and walk away?  Discussion 
involves the technical details.  Anything less is meaningless.




Richard Loosemore



Re: [singularity] Benefits of being a kook

2007-09-22 Thread Richard Loosemore
 be the same 
as the one they gave to nanotechnology:  it is ONLY useful if they can 
kick off a funding bandwagon called AGI and use it to funnel more 
money to existing corporate interests who are paying their (the 
politicians') bills.  Those corporate interests will then take the money 
and claim to be doing AGI even though, in fact, they will just carry 
on doing whatever they were doing before.


4) Religious reaction.  Unpredictable.  I think it could go either way. 
 I think in fact that many religious people will see in the Singularity 
a unified picture of the world that they were promising, and will find 
ways to embrace it.


5) One other very powerful interest group is the fiction and 
fearmongering community - the science fiction folks in Hollywood who 
want to make money off ideas that can be turned into horror - and the 
more realistic the horror, the better.  This is the community that 
already takes it seriously, and will do so even more when the rest of 
the world wakes up to the idea.


I think it is this 3rd group that is most to be feared.

And this is also the group that a certain element within SIAI - with its 
naive discussion of unrealistic threats, and exaggeration of the 
impossibility of dealing with the real threats - is playing up to.  The 
fear mongers in Hollywood would *love* that SIAI-based group to get more 
publicity, because they'd make money hand over fist if that happened.



Richard Loosemore





Re: [singularity] The humans are dead...

2007-05-29 Thread Richard Loosemore

Keith Elis wrote:

Richard Loosemore wrote:

 Your email could be taken as threatening to set up a website to promote 
 violence against AI researchers who speculate on ideas that, in your 
 judgment, could be considered scary.


I'm on your side, too, Richard. 


I understand this, and I apologize for what may have been too strong a 
reaction on my part.  You have to understand that, from what I saw in 
your original message, your words looked only one step removed from a 
Unabomber-style threat.


I am afraid the question and answer sequence below got too tangled for 
me to dissect it in detail:  I accept that your intention was to warn 
against the foolishness of wild speculations, rather than to threaten 
anyone who indulged in thought experiments.


Still, I hope you will understand that by saying that you have 
considered collecting the remarks of AI researchers on a website, with 
the *implied* idea that this would embolden or encourage people to 
overreact to those remarks, or take them out of context, and perhaps 
cause those people to come after said researchers, you expressed 
yourself in a way that some might consider threatening.


You ask one question that I would like to answer in a separate message, 
since it is important enough in its own right.



Richard Loosemore.




Answer me this, if you dare: Do you believe it's possible to design an
artificial intelligence that won't wipe out humanity? 

 While I think Shane's comments were silly, they are, in my opinion, so 
 far removed from any situation in which they could make a difference in 
 the real world, that your threatening remarks are viscerally disgusting.


I understand you're having a strong reaction to the viewpoint I posted,
but are you that far removed from the rest of humanity that this view
could disgust you? I would expect something more from a cognitive
scientist. What does your experience with minds tell you? Is this
viewpoint so ridiculous that few, if any, would agree with it?

 I happen to be expert enough in the AI field to know that there are good 
 reasons to believe that his comments cannot *ever*, in the entire 
 history of the universe, have any effect on the behavior of a real AI.


I thought Shane posed a question about killing off humanity v. killing
off a superintelligent AI. What comments are you referring to?

 In fact, almost all of the scary things said about the impact of 
 artificial intelligence are wild speculations that are in the same 
 category:  virtually or completely impossible in the real world.


I hope you're right.  

 In that larger context, if anyone were to promote attacks on AI 
 researchers because those people think they are saying scary things, 
 they would be no better than medieval witch-hunters.


This is a great way to put it. Now imagine yourself in front of the
Inquisition, and answer the first question I posed.

Thanks for the response.

Keith




Re: [singularity] Friendly question...

2007-05-27 Thread Richard Loosemore

Joshua Fox wrote:

[snip]
When you understand the following, you will have surpassed most AI 
experts in understanding the risks: If the first AGI is given or decides 
to try for almost any goal, including a simple harmless goal like 
being as good as possible at proving theorems, then humanity will be 
wiped out by accident.


This is not true.

You assume a general intelligence, but then you also assume that this
general, smart-as-a-human AGI is driven by a motivational system so
incredibly stupid that it is barely above the level of a pocket calculator.

Almost certainly, such a system would not actually work.  With a
motivational system as bad as that, it would never get to be an AGI in
the first place.  Hence your assertion that humanity will be wiped out
by accident is completely untenable.



Richard Loosemore



Re: Machine Motivation Gets Distorted Again [WAS Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page]

2007-05-15 Thread Richard Loosemore

Matt Mahoney wrote:


Richard,

I looked at your 2006 AGIRI talk, the one I believe you referenced in our
previous discussion on the definition of intelligence,
http://www.agiri.org/forum/index.php?act=STf=21t=137

You use the description complex adaptive system, which I agree is a
reasonable definition of intelligence.  You also assert that mathematics is
useless for the analysis of complex systems.  Again I agree.  But I don't
understand your criticism of Shane's work.  After all, he is the one who
proved the correctness of your assertion.


The abstract on the AGIRI website is a poor shadow of the paper that 
will be published in the proceedings:  I will send a copy of that paper 
to you offlist.


The term "complex adaptive system" has very specific connotations that I 
think you have missed here:  I was not using it as a definition of 
intelligence.  It refers to a general type of system that has a very 
particular kind of relation between the low-level mechanisms that drive 
the system and the overall behavior of the system.  In a CAS (or, if you 
prefer, in a complex system) there is no analytic relationship between 
the low-level mechanisms and the overall behavior.  Basically, you 
cannot solve the equations and derive the global behavior.  This is the 
sense in which mathematics is useless for the analysis of complex systems.
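
A minimal sketch of the kind of system meant here, using nothing beyond 
standard Python: an elementary cellular automaton (rule 110), whose global 
behaviour is obtained only by running the low-level rule, not by solving 
equations for it.  This is offered purely as a generic illustration of a 
complex system, not as a model of anything in the talk or the paper.

RULE = 110                                   # the local update rule, encoded as a bit table
cells = [0] * 40 + [1] + [0] * 40            # start from a single "on" cell

for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    # each cell's next state is looked up from its (left, centre, right) neighbourhood
    cells = [(RULE >> (4 * l + 2 * c + r)) & 1
             for l, c, r in zip([0] + cells[:-1], cells, cells[1:] + [0])]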


In essence, I assert that intelligence is something that we can 
observe in certain systems (namely, in us), but that it is a high-level 
characteristic of what is actually a complex system, and so it cannot be 
defined precisely, only observed.  You can give descriptive 
definitions, but not closed-form definitions that can be used (for 
example) as the basis for a mathematical proof of the properties of 
intelligent systems in general.


The full argument is much more detailed, of course, but that is the core 
of it.


Oh, and:  Shane is *not* the one who proved the correctness of my 
assertion!  I am not sure where you got that from.  ;-)



Richard Loosemore.



Re: Neural language models (was Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page)

2007-05-15 Thread Richard Loosemore

Matt Mahoney wrote:


I doubt you could model sentence structure usefully with a neural network
capable of only a 200 word vocabulary.  By the time children learn to use
complete sentences they already know thousands of words after exposure to
hundreds of megabytes of language.  The problem seems to be about O(n^2).  As
you double the training set size, you also need to double the number of
connections to represent what you learned.


-- Matt Mahoney, [EMAIL PROTECTED]
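
For what it is worth, the back-of-envelope arithmetic behind the quoted 
O(n^2) claim can be made concrete as follows.  This is a sketch only: the 
sizes are invented, and the premise that the number of connections must grow 
in proportion to the training data is exactly what is disputed below.

def training_cost(tokens, connections):
    # roughly one update per connection per training token
    return tokens * connections

base_tokens, base_connections = 1_000_000, 2_000_000
for k in (1, 2, 4, 8):
    tokens, connections = k * base_tokens, k * base_connections
    ratio = training_cost(tokens, connections) / training_cost(base_tokens, base_connections)
    print(f"{tokens:>9} tokens, {connections:>9} connections -> {ratio:>4.0f}x the base cost")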


The problem does not need to be O(n^2).

And remember:  I used a 200 word vocabulary in a program I wrote 16 
years ago, on a machine with only one thousandth of today's power.


And besides, solving the problem of understanding sentences could easily 
be done in principle with even a vocabulary as small as 200 words.


Richard Loosemore.



Re: [singularity] Re: [tt] [agi] Definition of 'Singularity' and 'Mind'

2007-04-18 Thread Richard Loosemore


The possibility has occurred to me. :-)





Colin Tate-Majcher wrote:
Heheh, how do you know you didn't want to know what it was like to live 
in the 2000s and work toward the Singularity?  Maybe we are already 
super advanced and just got bored :)


-Colin

On 4/18/07, *Richard Loosemore* [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


Eugen Leitl wrote:
  On Wed, Apr 18, 2007 at 03:54:50AM -0400, Randall wrote:
 
  I can't for the life of me imagine why anyone who had seen the
  elephant would choose to go back to being Mundane.
 
  The question is also whether they could, if they wanted to.
  A neanderthal wouldn't function well in today's society,
  and anything lesser would run a good chance of becoming roadkill.
 
  If I could flip a switch and increase my _g_ by two orders of
  magnitude, I'd never flip that switch back.   Why would anybody?
 
  I wouldn't. But I wouldn't max out the knob immediately, either.
  I would just go for a slow, sustainable growth, at least as long
  nobody else is rushing ahead.
 

[META COMMENT.  Is it my imagination, or have some funny things
been happening to the AGI and/or Singularity lists recently... e.g.
delivery of messages as if they were offlist?]

I think you are looking at the possibilities through far too narrow a
prism.

Consider.  Would it be interesting to find out what it is like to be, say, a
tiger?  A whale?  A dolphin?  I can think of ways to temporarily get
transferred into the form of any reasonably high-level animal, then come
back to being human later, with at least some memories of what it was
like to have been in that state.

In a future in which all these things are possible, why would people not
be interested in having this kind of fun?

Now imagine the possibility of becoming superintelligent.  That could
get kind of heavy after a while.  I do not necessarily think that I want
to know about all of the science in human history, for example, to such
a deep extent that it would be as if I had been teaching it for
centuries, and was bored with every last bit of it.  Would you?

I would want to have fun.  And the big part of having fun would be
finding out new stuff.

So, yes, I would want to become superintelligent occasionally, but it
seems to me that the more intelligent I become, the more I know about
complex problems I cannot fix, and the more that frustrates me.  That's
not fun after a while.  Sometimes it would be nice to go back to just
being a kid for a while.

Then there is the possibility of recreating historical situations.  I
would like to be able to be one of the people who was around when none
of modern science existed, just so I could try to discover that stuff
when it was new.  To do that I would have to reduce my current knowledge
by putting it on ice for a while.

And on and on...  I can think of vast numbers of reasons not to do the
boring thing of just trying to get into a high-intelligence brain.

It's not the destination, folks, it's the journey.




Richard Loosemore





[singularity] Definition of 'Singularity' and 'Mind'

2007-04-17 Thread Richard Loosemore


[This message has been crossposted from the AGI list.  Apologies for 
duplication]


Some of the recent discussion has become tangled partly as a result of
different understandings of what the 'Singularity' is and what the
relationship might be between our own minds and hypothetical future
minds.  (Or 'Minds', to use the Iain M. Banks nomenclature).

'Singularity'

When I use that word, I mean a perfectly comprehensible situation in
which we build computer systems that can discover new science and new
technology at speeds that exceed some significant multiple of the speed
at which humans discover those things -- and just for the sake of
argument I usually adopt a 1000x threshold as being both attainable and
radically different from the situation today.  (It is assumed that these
machines will actually do the production of the new science and
technology, of course, rather than be capable of doing it but unwilling
to do so.  That raises other issues, but as far as I am concerned the
concept of a Singularity is about that situation where they both can and
do start generating new knowledge at that rate).

In other words, when we get to the point where we get the next thousand
years of knowledge in one year, that is my concept of the Singularity.

There is another concept of the Singularity that involves something like
when the curves go off to infinity and everything becomes completely
unknowable.

This concept strikes me as outrageously speculative.  First, I don't
have any reason to believe that those curves really will go off to
infinity (there could be limits).  Second, I don't necessarily believe
that the results of the first phase (the type of Singularity I defined
above) will automatically lead to the creation of quasi-infinite minds,
or completely incomprehensible minds, or a completely unpredictable,
incomprehensible world.  All that stuff is wild speculation compared
with the modest reading of the Singularity I gave above.

My definition of the Singularity is still capable of bringing a wildly
different future, just not the kind of open-ended craziness that some
people speculate about.

Which brings me to this:

'Mind'

I don't want to produce a comprehensive definition of 'mind', but only
make a point about the way the word is being used right now.

When people talk about future minds possibly being incomprehensible to
'us', I find this talk peculiar.

What makes people think that there will ever be a situation when there
will  be two separate communities, one of them being 'Minds' (in the
IMB/Culture sense), and the other being us 'minds'?

If the mild Singularity I described above is what actually happens, then
I would expect a situation in which our own minds have the option of
shuttling back and forth between our present level of intelligence and
the level of the smartest machines around.  I mean that literally: I
foresee a point when we could shift up and down as easily as we (or our
synchromesh transmission systems) shift gears today.

Just as I like to change phones every so often to make sure I have the
coolest, fastest one available, so I see a point when it would be
inconceivable that those minds that started out as biology should
somehow feel obliged to stay that way, as a separate species that could
not 'understand' the highest level Minds available at that time.

Why would I 'expect' this situation?  That has to do with the way that 
the Minds would behave, which has to do with their motivational systems. 
 Long discussion there, but the bottom line is that it is quite 
possible (and I believe extremely likely) that they would behave in such 
a way as to encourage a situation where human minds were freely 
upgradeable all the time.


Me, personally, I would not necessarily want to stay in the 
superintelligent state all the time, but some of the time I certainly would.


But in that context, it makes no sense to ask whether there would be
minds so advanced that 'we' could never understand them.

Or, to be precise, it is not at all obvious that such a situation will 
ever exist.




Richard Loosemore.



Entropy of the universe [WAS Re: [singularity] Implications of an already existing singularity.]

2007-03-28 Thread Richard Loosemore

Matt Mahoney wrote:

--- Eugen Leitl [EMAIL PROTECTED] wrote:


On Tue, Mar 27, 2007 at 06:50:59PM -0700, Matt Mahoney wrote:


Of course it could be that a singularity has already happened, and what

you

perceive as the universe is actually a simulation within the resulting
superintelligence.

Is this a falsifyable theory?


Unfortunately, no.  You would have to prove that the universe is not
computable, for example, that your observations are a function of the halting
probability Omega or some other uncomputable number.  I don't know that that
would even be mathematically possible.

But everything we know about it suggests that the universe is computable.  For
one, the universe has finite entropy*.  For another, Occam's Razor seems to
work in practice, consistent with AIXI's assumption of a computable
environment (abductive reasoning, I know).  For a third, there is nothing
going on in the human brain that we believe is not computable, so it would be
impossible to distinguish reality from a simulation, and we are simply
programmed to reject such a possibility.

*The entropy of the universe is of the order T^2 c^5/hG ~ 10^122 bits, where T
is the age of the universe, c is the speed of light, h is Planck's constant
and G is the gravitational constant.  By coincidence (or not?), each bit would
occupy the volume of a proton.  (The physical constants do not depend on any
particle properties).
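
As a rough numerical check of the quoted estimate, using standard values for 
the constants (the exact exponent depends on whether h or h-bar is used and 
on the precise age adopted), the following sketch reproduces the order of 
magnitude:

from math import pi, log10

T = 13.8e9 * 3.156e7   # age of the universe, seconds
c = 2.998e8            # speed of light, m/s
h = 6.626e-34          # Planck's constant, J s
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2

bits = T**2 * c**5 / (h * G)
print(f"T^2 c^5 / (h G)     ~ 10^{log10(bits):.0f} bits")
print(f"T^2 c^5 / (hbar G)  ~ 10^{log10(bits * 2 * pi):.0f} bits")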


A small but crucial point:  this is the entropy of everything within the 
horizon visible from *here*.  What about the stuff (possibly infinite 
amounts of stuff) that lies beyond the curvature horizon?


Richard Loosemore



Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Richard Loosemore

Ben Goertzel wrote:




Sorry, but I simply do not accept that you can make "do really well on 
a long series of IQ tests" into a computable function without getting 
tangled up in an implicit homuncular trap (i.e. accidentally assuming 
some real intelligence in the computable function).


Let me put it this way:  would AIXI, in building an implementation of 
this function, have to make use of a universe (or universe simulation) 
that *implicitly* included intelligences that were capable of creating 
the IQ tests?


So, if there were a question like this in the IQ tests:

Anna Nicole is to Monica Lewinsky as Madonna is to ..


Richard, perhaps your point is that IQ tests assume certain implicit 
background knowledge.  I stated in my email that AIXI would equal any 
other intelligence starting with the same initial knowledge set.  So, 
your point is that IQ tests assume an initial knowledge set that is part 
and parcel of human culture.



No, that was not my point at all.

My point was much more subtle than that.

You claim that AIXI would "equal any other intelligence starting with 
the same initial knowledge set."  I am focussing on the "initial 
knowledge set."


So let's compare me, as the other intelligence, with AIXI.  What exactly 
is the same initial knowledge set that we are talking about here? 
Just the words I have heard and read in my lifetime?  The words that I 
have heard, read AND spoken in my lifetime?  The sum total of my sensory 
experiences, down at the neuron-firing level?  The sum total of my 
sensory experiences AND my actions, down at the neuron firing level? 
All of the above, but also including the sum total of all my internal 
mental machinery, so as to relate the other fluxes of data in a coherent 
way?  All of the above, but including all the cultural information that 
is stored out there in other minds, in my society?  All of the above, 
but including simulations of all the related


Where, exactly, does AIXI draw the line when it tries to emulate my 
performance on the test?


(I picked that particular example of an IQ test question in order to 
highlight the way that some tests involve a huge amount of information 
that requires understanding other minds ... my goal being to force AIXI 
into having to go a long way to get its information).


And if it does not draw a clear line around what "same initial knowledge 
set" means, but the process is open-ended, what is to stop the AIXI 
theorems from implicitly assuming that AIXI, if it needs to, can simulate 
my brain and the brains of all the other humans, in its attempt to do 
the optimisation?


What I am asking (non-rhetorically) is a question about how far AIXI 
goes along that path.  Do you know AIXI well enough to say?  My 
understanding (poor though it is) is that it appears to allow itself the 
latitude to go that far if the optimization requires it.


If it *does* allow itself that option, it would be parasitic on human 
intelligence, because it would effectively be simulating one in order to 
deconstruct it and use its knowledge to answer the questions.


Can you say, definitively, that AIXI draws a clear line around the 
meaning of "same initial knowledge set", and does not allow itself the 
option of implicitly simulating entire human minds as part of its 
infinite computation?


Now, I do have a second line of argument in readiness, in case you can 
confirm that it really is strictly limited, but I don't think I need to 
use it.  (In a nutshell, I would go on to say that if it does draw such 
a line, then I dispute that it really can be proved to perform as well 
as I do, because it redefines what I am trying to do in such a way as 
to weaken my performance, and then proves that it can perform better 
than *that*).






Richard Loosemore




Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Richard Loosemore

Ben Goertzel wrote:


I agree that, to compare humans versus AIXI on an IQ test in a fully 
fair way (that tests only intelligence rather than prior knowledge) 
would be hard, because there is no easy way to supply AIXI with the same 
initial knowledge state that the human has.
Regarding whether AIXI, in order to solve an IQ test, would simulate the 
whole physical universe internally in order to simulate humans and thus 
figure out what a human would say for each question -- I really doubt 
it, actually.  I am very close to certain that simulating a human is NOT 
the simplest possible way to create a software program scoring 100% on 
human-created IQ tests.  So, the Occam prior embodied in AIXI would 
almost surely not cause it to take the strategy you suggest.

-- Ben


Alas, that was not quite the question at issue...

In the proof of AIXI's ability to solve the IQ test, is AIXI *allowed* 
to go so far as to simulate most of the functionality of a human brain 
in order to acquire its ability?


I am not asking you to make a judgment call on whether or not it would 
do so in practice, I am asking whether the structure of the proof allows 
that possibility to occur, should the contingencies of the world oblige 
it to do so.  (I would also be tempted to question your judgment call, 
here, but I don't want to go that route :-)).


If the proof allows even the possibility that AIXI will do this, then 
AIXI has an homunculus stashed away deep inside it (or at least, it has 
one on call and ready to go when needed).


I only need the possibility that it will do this, and my conclusion holds.

So:  clear question.  Does the proof implicitly allow it?


Richard Loosemore.



Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Richard Loosemore

Ben Goertzel wrote:



Alas, that was not quite the question at issue...

In the proof of AIXI's ability to solve the IQ test, is AIXI *allowed* 
to go so far as to simulate most of the functionality of a human brain 
in order to acquire its ability?


I am not asking you to make a judgment call on whether or not it would 
do so in practice, I am asking whether the structure of the proof 
allows that possibility to occur, should the contingencies of the 
world oblige it to do so.  (I would also be tempted to question your 
judgment call, here, but I don't want to go that route :-)).


If the proof allows even the possibility that AIXI will do this, then 
AIXI has an homunculus stashed away deep inside it (or at least, it 
has one on call and ready to go when needed).


I only need the possibility that it will do this, and my conclusion 
holds.


So:  clear question.  Does the proof implicitly allow it?

Yeah, if AIXI is given initial knowledge or experiential feedback that 
is in principle adequate for internal reconstruction of simulated humans 
... then its learning algorithm may potentially construct simulated humans.


However, it is not at all clear that, in order to do well on an IQ test, 
AIXI would need to be given enough background data or experiential 
feedback to **enable** accurate simulation of humans


It's not right to say AIXI has a homunculus "on call and ready to go 
when needed."
Rather, it's right to say AIXI has the capability to synthesize an 
homunculus if it is given adequate data to infer the properties of one, 
and judges this the best way to approach the problem at hand.


My overall argument is completely vindicated by what you say here.

(My wording was sometimes ambiguous in that last email, I confess, but 
what I have been targeting is AIXI as a proof, not AIXI as an actual working 
system).


I only care about where AIXI gets the power of its proof, so it does not 
matter to me whether a practical implementation [sic] of AIXI would 
actually need to build a cognitive system.


It is not important whether it would do so in practice, because if the 
proof says that AIXI is allowed to build a complete cognitive system in 
the course of solving the IQ test problem, then what is the meaning of 
"AIXI would equal any other intelligence starting with the same initial 
knowledge set"?  Well, yeah, of course it would, if it was allowed to 
build something as sophisticated as that other intelligence!


It is like me saying I can prove that I can make a jet airliner with my 
bare hands ... and then when you delve into the proof you find that 
my definition of "make" includes the act of putting in a phone call to 
Boeing and asking them to deliver one.  Such a proof is completely 
valueless.


AIXI is valueless.

QED.



Richard Loosemore.




Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:

What I wanted was a set of non-circular definitions of such terms as 
intelligence and learning, so that you could somehow *demonstrate* 
that your mathematical idealization of these terms correspond with the 
real thing, ... so that we could believe that the mathematical 
idealizations were not just a fantasy.

The last time I looked at a dictionary, all definitions are circular.  So
you win.

Sigh!

This is a waste of time:  you just (facetiously) rejected the 
fundamental tenet of science.  Which means that the stuff you were 
talking about was just pure mathematical fantasy, after all, and nothing 
to do with science, or the real world.



Richard Loosemore.


What does the definition of intelligence have to do with AIXI?  AIXI is an
optimization problem.  The problem is to maximize an accumulated signal in an
unknown environment.  AIXI says the solution is to guess the simplest
explanation for past observation (Occam's razor), and that this solution is
not computable in general.  I believe these principles have broad
applicability to the design of machine learning algorithms, regardless of
whether you consider such algorithms intelligent.
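
To make the quoted idea concrete, here is a deliberately tiny sketch of 
"prefer the simplest hypothesis consistent with the observations so far."  
Real AIXI and Solomonoff induction range over all programs and are not 
computable; the "programs" below are just a handful of hand-written rules 
with invented description lengths.

observations = [0, 1, 0, 1, 0, 1]

# (description length, name, rule) -- the lengths are made up for illustration
hypotheses = [
    (1, "all zeros",      lambda i: 0),
    (1, "all ones",       lambda i: 1),
    (2, "alternate 0,1",  lambda i: i % 2),
    (2, "alternate 1,0",  lambda i: (i + 1) % 2),
]

consistent = [h for h in hypotheses
              if all(h[2](i) == obs for i, obs in enumerate(observations))]

length, name, rule = min(consistent)   # the shortest consistent "program" wins
print(f"chosen: {name}; predicted next symbol: {rule(len(observations))}")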


You're going around in circles.

If you were only talking about machine learning in the sense of an 
abstract mathematical formalism that has no relationship to learning, 
intelligence or anything going on in the real world, and in particular 
the real world in which some of us are interested in the problem of 
trying to build an intelligent system, then, fine, all power to you.  At 
*that* level you are talking about a mathematical fantasy, not about 
science.


But you did not do that:  you made claims that went far beyond the 
confines of a pure, abstract mathematical formalism:  you tried to 
relate that to an explanation of why Occam's Razor works (and remember, 
the original meaning of Occam's Razor was all about how an *intelligent* 
being should use its intelligence to best understand the world), and you 
also seemed to make inferences to the possibility that the real world 
was some kind of simulation.


It seems to me that you are trying to have your cake and eat it too.


Richard Loosemore.



Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:

What I wanted was a set of non-circular definitions of such terms as 
intelligence and learning, so that you could somehow *demonstrate* 
that your mathematical idealization of these terms correspond with the 
real thing, ... so that we could believe that the mathematical 
idealizations were not just a fantasy.


The last time I looked at a dictionary, all definitions are circular.  So you
win.


Sigh!

This is a waste of time:  you just (facetiously) rejected the 
fundamental tenet of science.  Which means that the stuff you were 
talking about was just pure mathematical fantasy, after all, and nothing 
to do with science, or the real world.



Richard Loosemore.



Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Richard Loosemore

Ben Goertzel wrote:

Richard Loosemore wrote:

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:

What I wanted was a set of non-circular definitions of such terms as 
intelligence and learning, so that you could somehow 
*demonstrate* that your mathematical idealization of these terms 
correspond with the real thing, ... so that we could believe that 
the mathematical idealizations were not just a fantasy.


The last time I looked at a dictionary, all definitions are 
circular.  So you win.


Richard, I long ago proposed a working definition of intelligence as 
"achieving complex goals in complex environments."  I then went through 
a bunch of trouble to precisely define all the component terms of that 
definition; you can consult the Appendix to my 2006 book The Hidden 
Pattern.
Shane Legg and Marcus Hutter have proposed a related definition of 
intelligence in a recent paper...


Anyone can propose a definition.  The point of my objection is that a 
definition has to have some way to be compared against reality.


Suppose I define intelligence to be:

A function that maps goals G and world states W onto action states A, 
where G, W and A are any mathematical entities whatsoever.


That would make any function that maps X × Y into Z an intelligence.

Such a definition would be pointless.  The question is *why* would it be 
pointless?  What criteria are applied, in order to determine whether the 
definition has some connection to the thing that in everyday life we call 
intelligence?


My protest to Matt was that I did not believe his definition could be 
made to lead to anything like a reasonable grounding.  I tried to get 
him to do the grounding, but to no avail:  he eventually resorted to the 
blanket denial that any definition means anything ... which is a cop out 
if he wanted to defend the claim that the formalism was something more 
than a mathematical fantasy.



Richard Loosemore


P.S.  Quick sanity check:  you know the last comment in the quote you 
gave (about looking in the dictionary) was Matt's, not mine, right?






Re: [singularity] Scenarios for a simulated universe

2007-03-02 Thread Richard Loosemore


Matt,

When you said (in the text below):

  In every practical case of machine learning, whether it is with 
decision trees, neural networks, genetic algorithms, linear 
regression, clustering, or whatever, the problem is you are given 
training pairs (x,y) and you have to choose a hypothesis h from a 
hypothesis space H that best classifies novel test instances, h(x) = y.


... you did *exactly* what I was complaining about.  Correct me if I am 
wrong, but it looks like you just declared learning to be a particular 
class of mathematical optimization problem, without making reference to 
the fact that there is a more general meaning of learning that is 
vastly more complex than your above definition.


What I wanted was a set of non-circular definitions of such terms as 
intelligence and learning, so that you could somehow *demonstrate* 
that your mathematical idealization of these terms correspond with the 
real thing, ... so that we could believe that the mathematical 
idealizations were not just a fantasy.


If what you gave was supposed to be a definition, then it was circular 
(you defined learning to *be* the idealization).


The rest of what you say (about Occam's Razor etc.) is irrelevant if you 
or Hutter cannot prove something more than a hand-waving connection 
between the mathematical idealizations of intelligence, learning, 
etc., and the original meanings of those words.


So my original request stands unanswered.


Richard Loosemore.



P.S.   The above definition is broken anyway:  what about unsupervised 
learning?  What about learning by analogy?
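
For readers unfamiliar with the supervised-learning framing being criticised 
here, a minimal sketch of it follows, with invented data and an invented 
hypothesis space.  Note that it illustrates exactly the narrow sense of 
"learning" that the objection above is about, nothing broader.

train = [(1, 0), (2, 0), (7, 1), (9, 1)]                              # training pairs (x, y)
H = {f"x >= {t}": (lambda x, t=t: int(x >= t)) for t in range(11)}    # a tiny hypothesis space

def training_error(h):
    return sum(h(x) != y for x, y in train)

best = min(H, key=lambda name: training_error(H[name]))   # pick h minimising training error
print("chosen h:", best, "-> prediction for novel x=5:", H[best](5))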





Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

As you probably know, Hutter proved that the optimal behavior of a
goal seeking agent in an unknown environment (modeled as a pair of
interacting Turing machines, with the environment sending an
additional reward signal to the agent that the agent seeks to
maximize) is for the agent to guess at each step that the environment
is modeled by the shortest program consistent with the observed
interaction so far.  The proof requires the assumption that the
environment be computable.  Essentially, the proof says that Occam's
Razor is the best general strategy for problem solving.  The fact
that this works in practice strongly suggests that the universe is
indeed a simulation.

It suggests nothing of the sort.

Hutter's theory is a mathematical fantasy with no relationship to the 
real world.

Hutter's theory makes a very general statement about the optimal behavior of
rational agents.  Is this really irrelevant to the field of machine learning?

Define rational agent.

Define optimal behavior.


In the framework of Hutter's AIXI, optimal behavior is the behavior that
maximizes the accumulated reward signal from the environment.  In general,
this problem is not computable.  (It is equivalent to solving the Kolmogorov
complexity of the environment).  An agent with limited computational resources
is rational if it chooses the best strategy within those limits for maximizing
its accumulated reward signal (in general, a suboptimal solution).

Then prove that a rational agent following optimal behavior is 
actually intelligent (as we in colloquial speech use the word 
intelligent), and do this *without* circularly defining the meaning of 
intelligence to be, in effect, the optimal behavior of a rational agent.


Turing defined an agent as intelligent if communication with it is
indistinguishable from human.  This is not the same as rational behavior, but
it is probably the best definition we have.


One caveat:

Don't come back and ask me to be precise about what we in colloquial 
speech mean when we use the word intelligent, because some of us who 
reject this theory would state that the term does not have an analytic 
definition, only an empirical one.


Your position, on the other hand, is that a precise definition does 
exist and that you know what it is when you say that a rational agent 
following optimal behavior is an intelligent system.


For this reason the onus is on you (and not me) to say what intelligence is.

My claim is that you cannot, without circularity, prove that rational 
agents following optimal behavior are the same thing as intelligent 
systems, and for that reason your use of all of these terms is just 
unsubstantiated speculation.  Labels attached to an abstract 
mathematical formalism with nothing but your intuition in the way of 
justification.


This unsubstantiated speculation then escalates into a zone of complete 
nonsense when it talks about hypothetical systems of infinite size and 
power, without showing in any way why we should believe that the 
properties of such infinitely large systems carry over to systems in the 
real world.


Hence, it is a mathematical fantasy with no relationship to the real world.

QED.



Richard Loosemore.


Hutter realizes

Re: [singularity] Scenarios for a simulated universe

2007-03-01 Thread Richard Loosemore

Matt Mahoney wrote:

As you probably know, Hutter proved that the optimal behavior of a
goal seeking agent in an unknown environment (modeled as a pair of
interacting Turing machines, with the environment sending an
additional reward signal to the agent that the agent seeks to
maximize) is for the agent to guess at each step that the environment
is modeled by the shortest program consistent with the observed
interaction so far.  The proof requires the assumption that the
environment be computable.  Essentially, the proof says that Occam's
Razor is the best general strategy for problem solving.  The fact
that this works in practice strongly suggests that the universe is
indeed a simulation.



It suggests nothing of the sort.

Hutter's theory is a mathematical fantasy with no relationship to the 
real world.




Richard Loosemore.



Re: [singularity] Motivational Systems that are stable

2006-10-30 Thread Richard Loosemore

Mitchell Porter wrote:


Richard Loosemore:

In fact, if it knew all about its own design (and it would, 
eventually), it would check to see just how possible it might be for 
it to accidentally convince itself to disobey its prime directive,


But it doesn't have a prime directive, does it? It has large numbers
of constraints affecting its decisions.


Well ... I have used "prime directive" to mean motives that the 
motivational system gives to the system.  This would initially be the 
very simple motives of attachment, affection, etc., but would then 
develop later into more sophisticated versions of the same.


Where the large numbers of constraints come in is in the mechanics 
of how the motivational system governs the system.




I would agree absolutely that emergent stability sounds possible, but
(1) one needs to say much more about the necessary and sufficient
conditions (2) one needs to define Friendliness and specialize to that
case. (And I hope you'd agree with these extra points.)


If by (1) you mean we need to know more about the implementation 
details, then, yes of course!  I am trying to establish a general 
principle to guide research.  I can see a number of the details, but not 
the complete picture yet.


Defining friendliness is more a matter of figuring out what motivational 
primitives give us what we want.  In other words, I agree with you, but 
the way we produce the definition will not necessarily involve writing 
down the actual laws of friendliness in explicit terms.  We need to do 
experimental and theoretical work to see how the initial motivational 
seeds control later behavior.


I do apologize for not being able to explain more of what is in my head 
here:  to do that properly I have to set up a lot of background, and be 
meticulous.  I am doing that, but it is more appropriate for a book than 
an essay on a list.  I'm working as fast as I can, given a limited time 
budget.



Richard Loosemore







Re: [agi] Re: [singularity] Motivational Systems that are stable

2006-10-30 Thread Richard Loosemore
 the 
computational problem of verifying the consistency of each new knowledge 
item with each other knowledge item.


But these two statements are actually very hard to defend.  Heuristics 
that decrease the number of comparisons IN A CONVENTIONAL AI SYSTEM are 
unreliable, precisely because of the fragile, mechanistic nature of 
such AI designs (see my reply to Hank Conn) ... but the whole force of 
my argument is to do the job without such conventional AI techniques, so 
that one won't fly unless you can say why.  As for the type of 
distributed system I propose being unable to solve this kind of problem, 
the very reverse is true:  parallel terraced scans are among the very 
best methods known for dealing with this kind of problem!  I couldn't 
have chosen a better architecture.  Your statement is mystifying.


***

What I feel I have done now is to address every one of the specific 
criticisms that you have put on the table to date.


I am certainly willing to accept that, beyond those specific points, you 
may have a gut feeling that it doesn't work, or that you prefer not to 
address it in more detail at this stage.  I'd be happy to postpone 
further debate until I can get a more detailed version in print.


What I would find extremely unfair would be more accusations that it is 
just vague handwaving without specific questions designed to show that 
the argument falls apart under probing.  I don't see the argument 
falling apart, so making that accusation again would be unjustified.



Richard Loosemore




Ben Goertzel wrote:

Hi,


There is something about the gist of your response that seemed strange
to me, but I think I have put my finger on it:  I am proposing a general
*class* of architectures for an AI-with-motivational-system.  I am not
saying that this is a specific instance (with all the details nailed
down) of that architecture, but an entire class. an approach.

However, as I explain in detail below, most of your criticisms are that
there MIGHT be instances of that architecture that do not work.


No.   I don't see why there will be any instances of your architecture
that do work (in the sense of providing guaranteeable Friendliness
under conditions of radical, intelligence-increasing
self-modification).

And you have not given any sort of rigorous argument that such
instances will exist

Just some very hand-wavy, intuitive suggestions, centering on the
notion that (to paraphrase) because there are a lot of constraints, a
miracle happens  ;-)

I don't find your intuitive suggestions foolish or anything, just
highly sketchy and unconvincing.

I would say the same about Eliezer's attempt to make a Friendly AI
architecture in his old, now-repudiated-by-him essay "Creating a
Friendly AI."  A lot in CFAI seemed plausible to me, and the intuitive
arguments were more fully fleshed out than yours in your email
(naturally, because it was an article, not an email) ... but in the
end I felt unconvinced, and Eliezer eventually came to agree with me
(though not on the best approach to fixing the problems)...


  In a radically self-improving AGI built according to your
  architecture, the set of constraints would constantly be increasing in
  number and complexity ... in a pattern based on stimuli from the
  environment as well as internal stimuli ... and it seems to me you
  have no way to guarantee based on the smaller **initial** set of
  constraints, that the eventual larger set of constraints is going to
  preserve Friendliness or any other criterion.

On the contrary, this is a system that grows by adding new ideas whose
motivational status must be consistent with ALL of the previous ones, and
the longer the system is allowed to develop, the deeper the new ideas
are constrained by the sum total of what has gone before.


This does not sound realistic.  Within realistic computational
constraints, I don't see how an AI system is going to verify that each
of its new ideas is consistent with all of its previous ideas.

This is a specific issue that has required attention within the
Novamente system.  In Novamente, each new idea is specifically NOT
required to be verified for consistency against all previous ideas
existing in the system, because this would make the process of
knowledge acquisition computationally intractable.  Rather, it is
checked for consistency against those other pieces of knowledge with
which it directly interacts.  If an inconsistency is noticed, in
real-time, during the course of thought, then it is resolved
(sometimes by a biased random decision, if there is not enough
evidence to choose between two inconsistent alternatives; or
sometimes, if the matter is important enough, by explicitly
maintaining two inconsistent perspectives in the system, with separate
labels, and an instruction to pay attention to resolving the
inconsistency as more evidence comes in.)
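
A toy sketch of the local-consistency idea described in the paragraph above: 
a new item is checked only against the items it directly interacts with, not 
against the whole knowledge base.  The representation here (propositions as 
subject/value pairs that "interact" when they share a subject) is invented 
for illustration and is not Novamente's actual mechanism.

knowledge = {
    "k1": ("sky_colour", "blue"),
    "k2": ("grass_colour", "green"),
    "k3": ("sky_colour", "blue"),
}

def directly_interacting(item, kb):
    # here, "directly interacting" just means: about the same subject
    subject, _ = item
    return [key for key, (s, _) in kb.items() if s == subject]

def add_item(key, item, kb):
    conflicts = [k for k in directly_interacting(item, kb) if kb[k][1] != item[1]]
    if conflicts:
        print(f"{key} conflicts with {conflicts}: flag for later resolution")
    kb[key] = item    # keep both perspectives for now, as described above

add_item("k4", ("sky_colour", "grey"), knowledge)   # checked only against k1 and k3
add_item("k5", ("sea_colour", "blue"), knowledge)   # nothing directly interacting; no check needed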

The kind of distributed system you are describing seems NOT to solve
the computational problem of verifying

Re: [singularity] Defining the Singularity

2006-10-27 Thread Richard Loosemore


Matt,

This is a textbook example of the way that all discussions of the 
consequences of a singularity tend to go.


What you have done here is to repeat the same song heard over and over 
again from people who criticise the singularity on the grounds that one 
or another nightmare will *obviously* happen.  You make no distinction 
between a fantasy of the worst that could happen, on the one hand, and a 
realistic, critical assessment of what is likely to happen, on the other.


Thus, summarizing what you just said:

1) A SAI *could* allow us to upload to super powerful computers (part of 
a vast network, etc. etc.) ... so therefore it *will* force this upon us.


2) A SAI *could* allow us to get rid of all the living organisms since 
they are not needed (presumably you mean our bodies), so therefore the 
SAI *will* force this upon us.


3) You insinuate that the SAI will *insist* that we don't need all 
those low level sensory processing and motor skills [we] learned over a 
lifetime and so therefore the SAI *will* deprive us of them.


4) You insinuate that the SAI will *insist* that we should get rid of 
any bad memories from childhood, if they trouble us, and so therefore it 
*will* do this to us whether we want it to or not.


You present all of these as if they would happen against our will, as if 
they would be forced upon the human race.  You don't come right out and 
say this; you just list all of these nightmare scenarios and then 
conclude that your nervousness is justified.  But nowhere do you even 
consider the possibility that any SAI that did this would be stupid and 
vicious ... you implicitly assume that even the best-case SAI would be 
this bad.


If, instead, you had said:

5) People could *choose* to upload into super powerful computers 
connected to simulated worlds, if they felt like it (instead of staying 
as they are and augmenting their minds when the fancy took them) ... 
but although some people probably would, most would choose not to do this.


6) Some might *choose* to do the above and also destroy their bodies. 
Probably not many, and even those who did could at any later time decide 
to relocate back into reconstructed versions of their old bodies, so it 
would be no big deal either way.


7) Some people might *choose* to dispense with the learned motor and 
sensory skills that were specific to their natural bodies ... but again, 
most would not (why would they bother to do this?), and they could 
always restore them later if they felt like it.


8) Some people might *choose* to erase painful memories.  They might 
also take the precaution of storing them somewhere, so they could change 
their minds and retrieve them in the future.


... then the alternative conclusion would be:  sounds like there is no 
problem with this.


Your version (items 1-4) was presented without any justification for why 
the SAI would impose its will instead of simply offering us lifestyle 
choices.  Why?


Your presentation here is just a classic example:  every single debate 
or discussion of the consequences of the singularity, it seems, is 
totally dominated by this kind of sloppy thinking.




Richard Loosemore




Matt Mahoney wrote:
I have raised the possibility that a SAI (including a provably friendly 
one, if that's possible) might destroy all life on earth.


By friendly, I mean doing what we tell it to do.  Let's assume a best 
case scenario where all humans cooperate, so we don't ask, for example, 
for the SAI to kill or harm others.  So under this scenario the SAI 
figures out how to end disease and suffering, make us immortal, make us 
smarter and give us a richer environment with more senses and more 
control, and give us anything we ask for.  These are good things, 
right?  So we achieve this by uploading our minds into super powerful 
computers, part of a vast network with millions of sensors and effectors 
around the world.  The SAI does pre- and postprocessing on this I/O, so 
it effectively can simulate any environment if we want it to.  If you 
don't like the world as it is, you can have it simulate a better one.


And by the way, there's no more need for living organisms to make all 
this run, is there?  Brain scanning is easier if you don't have to keep 
the patient alive.  Don't worry, no data is lost.  At least no important 
data.  You don't really need all those low level sensory processing and 
motor skills you learned over a lifetime.  That was only useful when you 
still had your body.  And while we're at it, we can alter your memories 
if you like.  Had a troubled childhood?  How about a new one?


Of course there are the other scenarios, where the SAI is not proven 
friendly, or humans don't cooperate...


Vinge describes the singularity as the end of the human era.  I think 
your nervousness is justified.
 
-- Matt Mahoney, [EMAIL PROTECTED]



- Original Message 
From: deering [EMAIL PROTECTED]
To: singularity@v2.listbox.com
Sent: Thursday, October 26, 2006 7

Re: [singularity] Motivational Systems that are stable

2006-10-27 Thread Richard Loosemore


Curious.

A couple of days ago, I responded to demands that I produce arguments to 
justify the conclusion that there were ways to build a friendly AI that 
was extremely stable and trustworthy, but without having to give a 
mathematical proof of its friendliness.


Now, granted, the text was complex, technical, and not necessarily 
worded as well as it could be.  But the background to this is that I am 
writing a long work on the foundations of cognitive science, and the 
ideas in that post were a condensed version of material that is spread 
out over several dense chapters in that book ... but even though that 
longer version is not ready, I finally gave in to the repeated (and 
sometimes shrill and abusive) demands that I produce at least some kind 
of summary of what is in those chapters.


But after all that complaining, I gave the first outline of an actual 
technique for guaranteeing Friendliness (not vague promises that a 
rigorous mathematical proof is urgently needed, and I promise I am 
working on it, but an actual method that can be developed into a 
complete solution), and the response was ... nothing.


I presume this means everyone agrees with it, so this is a milestone of 
mutual accord in a hitherto divided community.


Progress!



Richard Loosemore.



Re: [singularity] Defining the Singularity

2006-10-26 Thread Richard Loosemore

Matt Mahoney wrote:

- Original Message 
From: Starglider [EMAIL PROTECTED]
To: singularity@v2.listbox.com
Sent: Thursday, October 26, 2006 4:21:45 AM
Subject: Re: [singularity] Defining the Singularity


What I'm not sure about is that you gain anything from 'neural' or
'brainlike' elements at all. The brain should not be put on a pedestal.


I think you're right.  A good example is natural language.  Neural networks are 
poor at symbolic processing.  Humans process about 10^9 bits of information 
from language during a lifetime, which means the language areas of the brain 
must use thousands of synapses per bit.
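
For concreteness, the arithmetic behind that last figure, with the synapse
count for language-related cortex put in as an assumed order of magnitude
(it is not stated above):

# Back-of-envelope check of the "thousands of synapses per bit" figure.
# The synapse count for language-related cortex is an assumption made
# here for illustration; only the 10^9-bit figure comes from the message.
bits_from_language = 1e9          # ~10^9 bits of language over a lifetime
synapses_language_areas = 1e13    # assumed order of magnitude
print(synapses_language_areas / bits_from_language)   # ~1e4 synapses per bit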


Neural networks are *not* poor at symbolic processing:  you just used 
the one inside your head to do some symbolic processing.


And perhaps brains are so incredibly well designed that they have 
enough synapses for thousands of times the number of bits that a 
language user typically sees in a lifetime, because they are using some 
of those other synapses to actually process the language, maybe?


Like, you know, rather than just use up all the available processing 
hardware to store language information and then realize that there was 
nothing left over to actually use the stored information ... which is 
presumably what a novice AI programmer would do.



Richard Loosemore



Re: [singularity] Defining the Singularity

2006-10-25 Thread Richard Loosemore


Starglider wrote:

I have no wish to rehash the fairly futile and extremely disruptive
discussion of Loosemore's assertions that occurred on the SL4 mailing
list. I am willing to address the implicit questions/assumptions about my
own position.


You may not have noticed that at the end of my previous message I said:


If I am right, this is clearly an extremely important issue. For that
reason the pros and cons of the argument deserve to be treated with as
much ego-less discussion as possible. Let's hope that that happens on
the occasions that it is discussed, now and in the future. 


So what did you do?  You immediately went back to the same old, 
personal-abuse style of trying to win an argument, as exemplified by 
your opening paragraph, above, and by several similar statements below 
(for example: "You appear to have zero understanding of the functional 
mechanisms involved in a 'rational/normative' AI system").


My new policy is to discuss issues only with people who can resist the 
temptation to behave like this.


For that reason, Michael, you're now killfiled.

If anyone else wants to discuss the issues, feel free.

Richard Loosemore.





Richard Loosemore wrote:

The contribution of complex systems science is not to send across a
whole body of plug-and-play theoretical work: they only need to send
across one idea (an empirical fact), and that is enough. This empirical 
idea is the notion of the disconnectedness of global from local behavior 
- what I have called the 'Global-Local Disconnect' and what, roughly 
speaking, Wolfram calls 'Computational Irreducibility'.


This is only an issue if you're using open-ended selective dynamics on
or in a substrate with softly-constrained, implicitly-constrained or
unconstrained side effects. Nailing that statement down precisely would
take a few more paragraphs of definition, but I'll skip that for now. The
point is that plenty of complex engineered systems, including almost all
existing software systems, don't have this property. The assertion that
it is possible (for humans) to design an AGI with fully explicit and
rigorous side effect control is controversial and unproven; I'm optimistic
about it, but I'm not sure and I certainly wouldn't call it a fact. What
you failed to do was show that it is impossible, and indeed below you
seem to acknowledge that it may in fact be possible.

The assertion that it is more desirable to build an AGI with strong
structural constraints is more complicated. Eliezer Yudkowsky has
spent hundreds of thousands of words arguing fairly convincingly for
this, and I'm not going to revisit that subject here.


It is entirely possible to build an AI in such a way that the general
 course of its behavior is as reliable as the behavior of an Ideal
Gas: can't predict the position and momentum of all its particles,
but you sure can predict such overall characteristics as temperature,
pressure and volume.


A highly transhuman intelligence could probably do this, though I
suspect it would be very inefficient, partly because I expect you'd need
strong passive constraints on the power of local mechanisms (the kind
the brain has in abundance), which will always sacrifice performance
on many tasks compared to unconstrained or intelligently-verified
mechanisms. The chances of humans being able to do this are
pretty remote, much worse than the already not-promising chances
for doing constraint/logic-based FAI. Part of that is due to the fact that
while there are people making theoretical progress on constraint-based
analysis of AGI, all the suggestions for developing the essential theory
for this kind of FAI seem to involve running experiments on highly
dangerous proto-AGI or AGI systems (necessarily built before any
such theory can be developed and verified). Another problem is the
fact that people advocating this kind of approach usually don't
appreciate the difficulty of designing a good set of FAI goals in the
first place, nor the difficulty of verifying that an AGI has a precisely
human-like motivational structure if they're going with the dubious
plan of hoping an enhanced-human-equivalent can steer humanity
through the Singularity successfully. Finally the most serious problem
is that an AGI of this type isn't capable of doing safe full-scale
self-modification until it has full competence in applying all of this as yet
undeveloped emergent-FAI theory; unlike constraint-based FAI you
don't get any help from the basic substrate and the self-modification
competence doesn't grow with the main AI. Until both the abstract
knowledge of the reliable-emergent-goal-system-design and the
Friendly goal system to use it properly are fully in place (i.e. in all of
your prototypes) you're relying on adversarial methods to prevent
arbitrary self-modification, hard takeoff and general bad news.

In short it's ridiculously risky and unlikely to work, orders of magnitude
more so than actively verified FAI on a rational AGI substrate, which is
already

Re: [singularity] Defining the Singularity

2006-10-24 Thread Richard Loosemore

Starglider wrote:

You know my position on 'complex systems science'; yet to do anything
 useful, unlikely to ever help in AGI, would create FAI-incompatible 
systems even if it could.


And you know my position is that this is completely wrong.  For the sake 
of those who do not know about this difference of approaches, here is a 
summary.


You are more or less correct to point out that 'complex systems
science' [has] yet to do anything useful - this is a little extremist,
and it contains a biased criterion for 'useful', but in general I would
not want to waste my time arguing that 'complex systems science' has
produced a body of theoretical work that could be lifted up and imported
into AI research.

The trouble is, this is a red herring.

The contribution of complex systems science is not to send across a
whole body of plug-and-play theoretical work: they only need to send
across one idea (an empirical fact), and that is enough. This empirical 
idea is the notion of the disconnectedness of global from local behavior 
- what I have called the 'Global-Local Disconnect' and what, roughly 
speaking, Wolfram calls 'Computational Irreducibility'.
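
For anyone who has not run into Wolfram's term before, here is about the
smallest concrete illustration I can give: a one-dimensional cellular
automaton (Rule 30).  The local update rule is completely known and
trivially simple, yet so far as anyone knows the global pattern can only be
obtained by actually running it, step by step.  The code is just an
illustration of the general idea, nothing more:

# Rule 30: each cell's next state is left XOR (centre OR right).
# A fully specified local rule whose global pattern is, as far as is
# known, not predictable by any shortcut other than running it.
def rule30_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

width, steps = 63, 30
row = [0] * width
row[width // 2] = 1                 # single seed cell in the middle
for _ in range(steps):
    print(''.join('#' if c else '.' for c in row))
    row = rule30_step(row)

Exhaustive knowledge of the local mechanism does not, by itself, buy you a
compact account of the global behavior; that is the one thing being
imported from complex systems science.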


What many AI researchers cannot come to terms with is that something so
small and so simple could have such devastating implications for what
they do. It is very similar to Bertrand Russell turning up one day with
a tiny little paradox and wrecking Frege's life work.

As for complex systems ideas leading to the creation of FAI-incompatible
systems, this is exactly the opposite of the truth. Perhaps you missed
a comment that I made last week on the AGI list, regarding the relative
stability and predictability of different kinds of system:


It is entirely possible to build an AI in such a way that the general
course of its behavior is as reliable as the behavior of an Ideal
Gas: can't predict the position and momentum of all its particles,
but you sure can predict such overall characteristics as temperature,
pressure and volume.


The motivational system of some types of AI (the types you would
classify as tainted by complexity) can be made so reliable that the 
likelihood of them becoming unfriendly would be similar to the 
likelihood of the molecules of an Ideal Gas suddenly deciding to split 
into two groups and head for opposite ends of their container. Yes, it's 
theoretically possible, but ...
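
To put a rough number on the analogy: the chance that all N molecules
spontaneously crowd into one half of the box is 2^(-N), and fluctuations in
the aggregate quantities shrink roughly as 1/sqrt(N).  A small sketch of
just that statistical point (nothing here is a model of an AI motivational
system; it only illustrates why the gas analogy implies reliability):

# The statistical point behind the Ideal Gas analogy: individual molecules
# are unpredictable, but aggregate behaviour is extremely reliable.
import math, random

def left_half_fraction(n_molecules):
    """Fraction of molecules found, at random, in the left half of the box."""
    return sum(random.random() < 0.5 for _ in range(n_molecules)) / n_molecules

for n in (10, 1000, 100000):
    samples = [left_half_fraction(n) for _ in range(20)]
    spread = max(samples) - min(samples)
    print(f"N={n:>7}: left-half fraction varies by about {spread:.3f}")

# Probability that ALL N molecules end up in one half is 2**(-N); even for
# a microscopic sample of N = 1000 that is already about 10**(-301).
print(-1000 * math.log10(2))    # exponent of that probability, roughly -301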


And by contrast, the type of system that the Rational/Normative AI 
community want to build (with logically provable friendliness) is either 
never going to arrive, or will be as brittle as a house of cards: it 
will not degrade gracefully. For that reason, I believe that if/when you 
do get impatient and decide to forgo a definitive proof of friendliness, 
and push the START button on your AI, you will create something 
incredibly dangerous.


If I am right, this is clearly an extremely important issue. For that
reason the pros and cons of the argument deserve to be treated with as
much ego-less discussion as possible. Let's hope that that happens on
the occasions that it is discussed, now and in the future.


Richard Loosemore.

