Re: The Super-Intelligence (SI) speaks: An imaginary dialogue

2008-09-02 Thread Colin Hales
Hello again Jesse,
I am going to assume that by trashing computationalism, Marc Geddes 
will have enough ammo to vitiate Eliezer's various predilections... so, 
to that end...

Your various comments (see below) have a common thread of the form 'I 
see no reason why you can't ...X'. So let's focus on reasons why you 
can't ...X. These are numerous and visible in real, empirically 
verifiable physics... let's look at it from a dynamics point of view. 
Saying 'you can see no reason' would mean that if you chose a 
computationalist abstraction level (you mentioned atoms), you would 
claim the resultant agent able to demonstrate scientific behaviour 
indistinguishable from a human's.

I would claim that to be categorically false, and testably so. OK. 
First, call the computationalist artificial scientist COMP_S. Call the 
human scientist HUMAN_S. Call computationalism COMP. This saves a lot of 
writing! The test regime:

HUMAN_S constructs laws of nature tn using the human faculty for 
observation (call it P) delivered by real atoms in the brain of HUMAN_S. 
If COMP_S and HUMAN_S are to be indistinguishable, then the state 
dynamics (state vector space) of COMP_S must be as sophisticated and 
accessible as HUMAN_S's, and ALSO /convergent on the same outcomes as 
those of HUMAN_S/. Our test is that they both converge on a law of 
nature tn, say. Note: tn is an abstracted statement of an underlying 
generalisation in respect of the distal external natural world (such as 
tn = ta, a model of an atom). Yes? That is what we do... the portability 
of laws of nature tn proves that we have rendered such abstractions 
invariant to the belief dynamics of any particular scientist. Yes?

HUMAN_S constructs a model of atoms, a 'law of nature' = ta. Using that 
model ta, we then implement a sophisticated computational version of 
HUMAN_S at the level of the model: atoms. We assemble an atomic-level 
model replica of HUMAN_S. We run the computation on a host COMP 
substrate. This becomes our COMP_S. We expect the two to be identical to 
the extent of delivering indistinguishable scientific behaviour. We 
embody COMP_S with IO as sophisticated as a human's and wire it up. If 
the computationalist position holds, then by definition the dynamics of 
COMP_S must be (a) complex enough and (b) have access to sufficient 
disambiguated information to construct tn indistinguishably from HUMAN_S.

If computationalism is true, then given the same circumstance of 
original knowledge paucity (which can be tested), a demand for a 
scientific outcome should result in state-vector dynamics adaptation 
resulting in the delivery of the same tn (also testable), which we 
demand shall be radically novel. If they are really equivalent, this 
should happen. This is the basic position (I don't want to write it out 
again!).


I would claim the state trajectory of COMP_S to be fatally impoverished 
by the model ta (abstracted atoms). That is, the state trajectory of 
COMP_S would fail to consistently converge on a new law of nature tn and 
would demonstrate instability (chaotic behaviour). Just as ungrounded 
power supply voltages drift about, a symbolically ungrounded COMP_S 
will epistemically drift about.

Indeed, I would hold that to be the case no matter what the abstraction 
level: sub-atomic, sub-sub-atomic, sub-sub-sub-atomic... etc. ... the 
result would be identical. Remember: there's no such 'thing' as 
atoms... these are an abstraction of a particular level of the 
organisational hierarchy of nature. Also note: so-called 'ab-initio' 
quantum mechanics of the entire HUMAN_S would also fail, because QM is 
likewise just an abstraction of reality, not reality. COMP would claim 
that the laws of nature describing atoms behave identically to atoms; 
the model ensemble of ta atoms should be capable of expressing all the 
emergent properties of an ensemble of real atoms. This already makes 
COMP a self-referential, question-begging outcome. HUMAN_S is our 
observer, made of real atoms. COMP assumes that P is delivered by 
computing ta, when there is no such 'thing' as atoms! Atoms are an 
abstraction of a thing, not a thing. Furthermore, all the original 
atoms of HUMAN_S have been replaced with the atoms of the COMP_S substrate.

What is NOT in the law of nature ta is the relationship between the 
abstraction ta and all the other atoms in the distal world outside 
COMP_S (beyond the IO boundary). Assume you supplied all the data about 
all the atoms in the environment of the original human HUMAN_S used to 
construct and initialise COMP_S. You know all these relationships at the 
moment you measured all the atoms in HUMAN_S to get your model 
established. However, after initialisation, when you run COMP_S, all 
relationships of the model with the distal world (those intrinsic to the 
atoms which the model replaced) are GONE the instant the abstraction 
happens, from that

Re: The Super-Intelligence (SI) speaks: An imaginary dialogue

2008-09-02 Thread Brent Meeker

Colin Hales wrote:
 [quoted text trimmed]

Re: The Super-Intelligence (SI) speaks: An imaginary dialogue

2008-09-02 Thread marc . geddes



On Sep 2, 6:27 pm, Colin Hales [EMAIL PROTECTED] wrote:
 Hello again Jesse,
 I am going to assume that by trashing computationalism that Marc Geddes
 has enough ammo to vitiate Eleizer's various predilections so... to
 that end...

To make it clear, I'm not trashing computationalism. I maintain that
comp is true (see what Bruno said).

It's Bayesianism I'm trashing. And yes, I now have enough
'intellectual ammo' to crush Yudkowsky.

Cheers
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: The Super-Intelligence (SI) speaks: An imaginary dialogue

2008-09-01 Thread Colin Hales
Hi Marc,

*Eliezer*'s hubris about a Bayesian approach to intelligence is 
nothing more than the usual 'metabelief' about a mathematics... or about 
computation... meant in the sense that cognition is computation, where 
computation is done BY the universe (with the material of the universe 
used to manipulate abstract symbols).

*You don't have to work so hard to walk away from that approach...*

Computationalism is FALSE in the sense that it cannot be used to 
construct a scientist.
A scientist deals with the UNKNOWN.
If you could compute a scientist, you would already know everything! 
Science would be impossible.
So perhaps you *can* 'compute/simulate' a scientist, but if you could, 
the science must already have been done... hence you wouldn't want to. 
Computationalism is FALSE in the sense of 'not useful', not false in the 
sense of 'wrong'.

You cannot model a modeller of the intrinsically unknown. As a 
computationalist manipulator of abstract symbols you are required to 
deliver a model of how to learn - in which you must specify how all 
novelty shall be handled! In other words, you can't deal with the REAL 
unknown - where you have no such model!

I.e., a computationalist scientist is an oxymoron: a logical 
contradiction. If you say you can, then you are question-begging 
computationalism whilst failing to predict an a-priori unsupervised 
observer (a scientist).

The Bayesian 'given' (the conditional) assumes knowledge of a given 
which is a-priori not available. It assumes observation of the kind we 
have... otherwise how would you know any options to choose as givens? 
Furthermore, it assumes that if somehow we were to experiment to resolve 
a choice of 'givens' (Bayesian conditionals) as being the 'truth', then 
there is potentially an enormous collection of 'givens', all of which 
can be inserted in the same Bayesian predictor... resulting in 
degenerate 'knowledge': you know NOTHING, because you fail to resolve 
anything useful about the world outside. You don't even know there's an 
'outside'.
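The point about the 'given' can be made concrete with a minimal sketch 
(my illustration, not from the thread; the hypotheses and numbers are 
invented). A Bayesian updater's hypothesis set is fixed in advance: 
whatever data arrives, it can only redistribute belief among the givens 
its designer chose, and structure outside that set is invisible to it.

```python
# A minimal Bayesian updater over a fixed, finite set of 'givens'.
# It can only shuffle belief among hypotheses chosen in advance.

def posterior(priors, likelihoods, data):
    """Bayes' rule: prior times the product of per-datum likelihoods."""
    weights = {}
    for h, p in priors.items():
        w = p
        for d in data:
            w *= likelihoods[h](d)
        weights[h] = w
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Two assumed 'givens': the coin is fair, or biased 80% towards heads.
priors = {"fair": 0.5, "biased": 0.5}
likelihoods = {
    "fair":   lambda d: 0.5,
    "biased": lambda d: 0.8 if d == "H" else 0.2,
}

# Data actually produced by something outside BOTH hypotheses:
# a deterministic alternating process, H T H T ...
data = ["H", "T"] * 10

post = posterior(priors, likelihoods, data)
print(post)  # belief is split between the two givens only; the
             # alternating structure is invisible to the updater
```

Here neither 'given' describes the data-generating process; the updater 
nonetheless settles confidently on 'fair' and never notices there is 
anything to notice.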

The Bayesian (indeed, any computationalist) approach fails to predict 
observation (in the sense of ANY observation/an observer, not a 
particular observation) and fails to predict the science that might 
result from an observer.

This is the Achilles heel of the computationalist argument.

The computationalist delusion (dressed up in Bayesian or any other 
abstract symbol-manipulator's clothes) has to stop right here, right now 
and for good.

BTW, this does not mean that 'cognition is not computation'... I hold 
that cognition is NATURAL symbol manipulation, not ABSTRACT symbol 
manipulation. But that's a whole other story... The natural symbols are 
the key.

Please feel free to deliver the above to Eliezer. He'll remember me! 
Tell him the AGIs he is so fearful of are a DOORSTOP and will be 
pathetically vulnerable to human intervention. The whole AGI 
fear-mongering realm needs to get over itself and start being 
scientific about what it does. It's all based on assumptions which are 
false.

cheers,
colin






Re: The Super-Intelligence (SI) speaks: An imaginary dialogue

2008-09-01 Thread Colin Hales
Hi!
Assumptions, assumptions, assumptions... take a look. You said:

"Why would you say that? Computer simulations can certainly produce 
results you didn't already know about, just look at genetic algorithms."

OK, here's the rub... /you didn't already know about/.
Just exactly 'who' (the 'you') is 'knowing' in this statement?
You automatically put an external observer outside my statement.
*My observer is the knower.* *There is no other knower:* the scientist 
who gets to know is the person I am talking about! There's nobody else 
around who gets to decide what is known... you put that into my story 
where there is none. My story is of /unsupervised/ learning. Nobody else 
gets to choose Bayesian priors/givens. And nobody else is around to pass 
judgement... the result IS the knowledge. Tricky, eh?

A genetic algorithm (that is, a specific kind of computationalist 
manipulation of abstract symbols) cannot be a scientist. Even the 'no 
free lunch' theorem proves that, without me adding anything. But just 
to seal the lid on it... I would defy any computationalist artefact 
based on abstract symbol manipulation to come up with a law of nature ...

... by 'law of nature' I mean an ABSTRACTION about the distal natural 
world derived from a set of experiences of the distal natural world (NOT 
merely IO signals... these are NOT experienced). The IO is degenerately 
related to the distal natural world by the laws of physics... a 
computationalist IO system is fundamentally degenerately related to the 
distal natural world... so it doesn't even know what is 'out there' at 
all, let alone that there's a generalisation operating BEHIND it. A law 
of nature, to a genetic algorithm or any other abstract/computationalist 
beast, would merely predict IO behaviour at its sensory boundary. It may 
be brilliantly accurate! But that *IS NOT SCIENCE*, because there's no 
verifiable deliverable to pass on... and it has nothing else to work 
with. An artefact based on this may survive in a habitat... but that is 
NOT science.
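The 'IO boundary' claim can be illustrated with a deliberately tiny 
genetic algorithm (a hypothetical sketch of mine, not anything from the 
thread). An invented 'hidden law' f(x) = 3x + 2 plays the distal world; 
the GA evolves two coefficients that shrink prediction error at the IO 
boundary, and that fitted pair of numbers is all it ever holds.

```python
# Toy GA: evolves a linear predictor for an IO boundary. The hidden law
# never appears in the GA's representation; only prediction error does.
import random

random.seed(0)

def hidden_law(x):
    # the 'distal world', invisible to the GA except via samples
    return 3 * x + 2

samples = [(x, hidden_law(x)) for x in range(-5, 6)]  # the IO boundary

def fitness(genome):
    a, b = genome
    # negative sum of squared prediction errors at the boundary
    return -sum((a * x + b - y) ** 2 for x, y in samples)

def mutate(genome):
    return tuple(g + random.gauss(0, 0.3) for g in genome)

pop = [(random.uniform(-10, 10), random.uniform(-10, 10))
       for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                       # keep the fittest
    pop = survivors + [mutate(random.choice(survivors))
                       for _ in range(20)]     # offspring with noise

best = max(pop, key=fitness)
print(best)  # close to (3, 2): brilliant IO prediction, but the GA
             # holds only coefficients, not the law 'behind' the IO
```

Whether one reads the evolved coefficients as 'knowing' the law, or as 
merely predicting the signal at the boundary, is precisely the point in 
dispute.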

So... there's no scientist here. (BTW, IO = input/output.)
cheers,
colin


Jesse Mazer wrote:
 Colin Hales wrote:

 Computationalism is FALSE in the sense that it cannot be used to construct a 
 scientist.
 A scientist deals with the UNKNOWN.
 If you could compute a scientist you would already know everything! Science 
 would be impossible.
 So you can 'compute/simulate' a scientist, but if you could the science must 
 already have been done... 

 Why would you say that? Computer simulations can certainly produce results 
 you didn't already know about, just look at genetic algorithms.




Re: The Super-Intelligence (SI) speaks: An imaginary dialogue

2008-09-01 Thread marc . geddes



On Sep 2, 1:56 pm, Colin Hales [EMAIL PROTECTED] wrote:
 Hi Marc,

 */Eliezer/*'s hubris about a Bayesian approach to intelligence is
 nothing more than the usual 'metabelief' about a mathematics... or about
 computation... meant in the sense that cognition is computation, where
 computation is done BY the universe (with the material of the universe
 used to manipulate abstract symbols)

 *You don't have to work so hard to walk away from that approach...*


Hi Colin,

The chess computer 'Deep Blue' was computational, and could play chess
better than the (then) world chess champion, Garry Kasparov. But that
didn't mean that the programmers understood all of chess, or that all
the chess had already been played. So I don't think your argument is a
good one. You can't rebut Yudkowsky's approach as easily as that ;)
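The Deep Blue point can be shown in miniature (a sketch of mine, using
the take-1-2-or-3 subtraction game rather than chess). The code below
contains only a brute-force game-tree search, yet it rediscovers the
classic winning rule 'leave your opponent a multiple of 4', which was
never written into it.

```python
# Brute-force search over the subtraction game: each turn a player
# removes 1, 2, or 3 stones; taking the last stone wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones):
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # no stones left: the previous player already won
    # a position is winning if some move leaves the opponent losing
    return any(not wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """A move that leaves the opponent in a losing position, if any."""
    for take in (1, 2, 3):
        if take <= stones and not wins(stones - take):
            return take
    return 1  # lost position: any move will do

# The search rediscovers 'leave a multiple of 4' by itself:
print([n for n in range(1, 13) if not wins(n)])  # → [4, 8, 12]
print(best_move(10))  # → 2 (leaving 8, a multiple of 4)
```

The programmer supplied only the rules and the search; the strategy
'emerged' from exhaustion of the game tree, which is arguably the
miniature analogue of Deep Blue outplaying its own authors.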

But I kind of understand your sentiment, and agree that science can't
(and shouldn't) be reduced to mere Bayesian probability shuffling.
There are aesthetic judgements involved in science, and I don't think
any precise mathematical definition of these aesthetic notions is
possible, as Bruno has already opined. Yudkowsky's excessive faith
in Bayesian induction is definitely his weakness. But that doesn't
mean we can't make a computational super-intelligence.

Cheers

