Re: COMP refutation paper - finally out

2011-06-16 Thread Bruno Marchal

Hi Benjayk,



Bruno Marchal wrote:


We just cannot do artificial intelligence in a provable manner. We
need chance, or luck. Even if we get some intelligent machine, we will
not know it for sure (perhaps just believe it correctly).
But this is a quite weak statement, isn't it? It just prevents a
mechanical way of making an AI, or making a provably friendly AI (like
Eliezer Yudkowsky wants to do).


Yes, it is quite weak. It can even be made much weaker if we allow
machines to make enough mistakes for indeterminate periods of time. In
that case, some necessarily non-constructive proofs can be made
constructive. After all, evolution itself is plausibly mechanical.







We can prove very little about what we do or know anyway. We can't  
prove

the validity of science, for example.


You are right, but here the point is more subtle. Most initial
theoretical statements are not provable, but we can take them as new
axioms without becoming inconsistent. Most theological statements of
the machines/numbers, however, have the property that, despite being
true, they become false when added as an axiom.
It is a bit like a theory with five axioms. You cannot add a sixth
axiom saying that the theory has five axioms. Self-consistency, and
consciousness, behave similarly. Human science and theological science
are full of things of that kind, I mean truths which just cannot be
asserted, except very cautiously. In fact the modal logic G* minus G
axiomatizes them all (at the propositional level).
That is perhaps the source of this very deep 'truth': hell is paved
with good intentions.
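The 'sixth axiom' point can be put in symbols (my sketch, using the
standard provability-logic reading; Con(T) abbreviates the arithmetical
consistency statement for T):

```latex
% Con(T) is true of any consistent theory T, and T + Con(T) remains
% consistent; but a theory asserting its *own* consistency -- the
% analogue of the "sixth axiom" -- destroys itself:
\[
  T' \;=\; T + \mathrm{Con}(T')
  \quad\Longrightarrow\quad
  T' \vdash \mathrm{Con}(T')
  \quad\Longrightarrow\quad
  T' \text{ is inconsistent (G\"odel II)}.
\]
```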






It doesn't even mean that there is no developmental process that will
allow us to create ever more powerful heuristics with which to find
better AI faster, in a quite predictable way (not predictable what kind
of AI we build, just *that* we will build a powerful AI), right?


Yes, that is possible. Heuristics are typically not algorithmic.

Bruno


http://iridia.ulb.ac.be/~marchal/



--
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: COMP refutation paper - finally out

2011-06-16 Thread Bruno Marchal


On 15 Jun 2011, at 21:20, benjayk wrote:



Hi Bruno,


Bruno Marchal wrote:


I think that comp might imply that simple virgin (non-programmed)
universal (and immaterial) machines are already conscious. Perhaps even
maximally conscious.

What could maximally conscious mean? My intuition says quite strongly
that consciousness is a dynamic open-ended process and that there is
no such thing as maximally conscious (except maybe in the trivial
sense of simply being conscious at all).


I tend to think that consciousness is the same for all conscious
beings, except that prejudices coming from competence can make it more
sleepy. So, paradoxically, consciousness might be maximal in the
absence of knowledge and beliefs.





I can't even conceive what this could be like.


Well, some drugs can help in that respect. Some thought experiments
also, but they are not of the type I have allowed in publications,
because they need you to imagine some amnesia, or coming back to the
state of a baby. It is not easy.








Bruno Marchal wrote:


Then adding induction gives them Löbianity, and
this makes them self-conscious (which might already be a delusion of
some sort).
Why do you think it could be a delusion? This would be a bit
reminiscent of buddhism. For me it sounds like quite a terrible
thought. After all it would mean all progress is in a way illusory and
maybe not even desirable, whereas I really wish (and pragmatically
believe) that eternal progress is the thing that can fulfill our
ideals of truth, conscious insight and happiness.


I am no longer sure about this. I can understand the appeal of the
idea of progress, but progress might just make pain more painful,
frustration more frustrating, etc. Truth is simply not fulfillable,
and happiness lies more in equilibrium and balance than in the pursuit
of bigger satisfaction. But then comp might be wrong, and I might miss
the point. But, yes, comp leads close to buddhism, and to ethical
detachment.








Bruno Marchal wrote:


I oppose intelligence/consciousness and competence/ingenuity. The
first is needed to develop the latter, but the latter has a negative
feedback on the first.

Can you explain this?

It seems to me that there is no clear line between intelligence and
competence, and that some kinds of competence (like aligning yourself
with the beliefs of society) can limit intelligence, while some help
to develop more intelligence (like doing science).


Let me remind you of my smallest theory of intelligence/consciousness.
I have already given it years ago, and also recently on the FOR list,
I think.


A machine is intelligent if and only if it is not stupid.
A machine is stupid when one of the following clauses is satisfied:
 - the machine believes that she is intelligent
 - the machine believes that she is stupid

Now that theory admits a transparent arithmetical interpretation.
Replace 'intelligent' by consistent (Dt), and 'stupid' by not
consistent (~Dt, that is Bf). Then the theory is just Gödel's second
incompleteness theorem, and is a sub-theory of G* (BDt -> Bf).
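For the record, the first stupidity clause follows from Löb's theorem
instantiated at falsity (a standard derivation in G, not specific to
this post):

```latex
% Löb's theorem:  B(Bp \to p) \to Bp.
% Take p := f (falsity), and note Dt \equiv \neg Bf:
\[
  B(Bf \to f) \to Bf
  \qquad\text{i.e.}\qquad
  B\,Dt \to Bf .
\]
% A machine that proves its own consistency proves falsity:
% "believing oneself intelligent makes one stupid", which is Gödel's
% second incompleteness theorem in modal dress.
```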


An obvious defect of that theory is that it makes pebbles intelligent.
But then, why not? Who has ever heard a pebble saying that it is
intelligent, or stupid, or saying any kind of stupidity? Like with the
taoists, the wise person keeps silent.


Concerning the learning competence of a machine, I measure it by the
classes of computable functions that the machine is able to identify
from finite samples of input-output pairs. This leads to computational
learning theory, or inductive inference theory, which shows that the
possible competences form a complex lattice with a lot of incomparable
competences, and with a lot of necessarily non-constructive gaps among
them.
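The inductive-inference setting mentioned above can be illustrated
with a toy "identification by enumeration" learner (my sketch; the
polynomial hypothesis class, the coefficient bounds, and the target
function are illustrative assumptions, not anything from the post):

```python
# A toy sketch of Gold-style "identification in the limit": a learner
# sees growing finite samples (x, f(x)) of an unknown function drawn
# from a known enumerable class, and must eventually converge on a
# correct hypothesis. Here the class is polynomials with small integer
# coefficients (an illustrative assumption).

from itertools import product

def hypotheses(max_degree=3, coeffs=range(-3, 4)):
    """Enumerate candidate polynomials as coefficient tuples (c0, c1, ...)."""
    for degree in range(max_degree + 1):
        for c in product(coeffs, repeat=degree + 1):
            yield c

def evaluate(poly, x):
    return sum(c * x**i for i, c in enumerate(poly))

def learner(sample):
    """Identification by enumeration: output the first hypothesis in the
    enumeration consistent with every (x, y) pair seen so far."""
    for h in hypotheses():
        if all(evaluate(h, x) == y for x, y in sample):
            return h
    return None  # target outside the class: no competence here

# Unknown target f(x) = x^2 - 2, i.e. coefficients (-2, 0, 1).
target = (-2, 0, 1)
sample, guesses = [], []
for x in range(6):                      # feed a growing sample
    sample.append((x, evaluate(target, x)))
    guesses.append(learner(sample))

# The guess changes finitely often, then stabilizes on the target once
# enough data points rule out the simpler consistent hypotheses.
```

Widening the hypothesis class (higher degree, other function families)
changes which targets are identifiable; the incomparable competences in
the text correspond to classes no single learner covers together.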


Roughly speaking, a machine becomes stupid when it confuses
intelligence and competence and begins to feel superior, or inferior,
and begins to lack some amount of respect for its fellow living
beings. Some of those fellows can believe in the superiority of those
machines, and believe that they are inferior, and this leads to a
dominant/dominated coupling, which unfortunately can be very stable
and profit the emergence of new entities.


Science per se does not lead to intelligence, as is, I think, sadly
illustrated by these last centuries. Science can kill intelligence,
and science without intelligence can lead to hell, especially if
science is confused with a sort of theology, instead of being used to
genuinely tackle, and interrogate, the (theological) fundamental
questions. Humans cannot yet accept their ignorance.


I have already argued that science, well understood, was born with
Pythagoras, and ended with the appearance of the Roman empire.
Fundamental questions are still a complete taboo for most scientists.
There is no question of raising any doubt about the theology of
Aristotle. Neither atheists nor Christians can accept 

Re: COMP refutation paper - finally out

2011-06-16 Thread meekerdb

On 6/16/2011 7:38 AM, Bruno Marchal wrote:
Concerning the learning competence of a machine, I measure it by the
classes of computable functions that the machine is able to identify
from finite samples of input-output pairs. This leads to computational
learning theory, or inductive inference theory, which shows that the
possible competences form a complex lattice with a lot of incomparable
competences, and with a lot of necessarily non-constructive gaps among
them.


Do you have some reference where this is explained?

Brent




Re: COMP refutation paper - finally out

2011-06-16 Thread Russell Standish
On Thu, Jun 16, 2011 at 03:34:51PM +0200, Bruno Marchal wrote:
 
 So we agree violently on this, to borrow an expression from Russell
 (I think).
 

To be fair, Brent used this expression when agreeing with me on
something. But it is a good one!

Cheers

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au





Re: COMP refutation paper - finally out

2011-06-15 Thread Russell Standish
Hi Colin,

I'm having a read through your paper now, and have a few comments to
keep the juices of debate flowing on this list.

Firstly, I'd like to say well done - you have written a very clear
paper in what is a very murky subject.

I have two comments right now - but I haven't finished, so there could
well be more.

1) Your definition of COMP is more along the lines of Deutsch's
physical Turing principle, or Thesis P. Wikipedia seems to call it the
strong CT thesis. It is important to note that it is a stronger
assumption than Bruno's COMP assumption, and indeed Bruno has already
given a proof that physics cannot be computable - so you might be
proving the same thing via a different method.

Nevertheless, I haven't yet seen whether weakening your definition of
COMP invalidates your argument.

2) A few times through the text you make remarks along the lines of
"it might appear that laws of nature might still be accessible by an
extreme form of the randomized-search/machine-learning approach, even
though it is obvious that human scientists do not operate this way."

Obvious? It is far from obvious. What you say flies directly in the
face of Popper's Conjectures and Refutations, and you would face a
horde of angry Popperians if you were to post this stuff on the FoR
list.


Anyway, I'll keep reading.

Cheers

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au





Re: COMP refutation paper - finally out

2011-06-15 Thread Bruno Marchal


On 14 Jun 2011, at 21:19, Terren Suydam wrote:


Thanks for the reply Bruno, comments below...

On Tue, Jun 14, 2011 at 9:53 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:

doesn't that imply the possibility
of an artificial intelligence?


In a weak sense of Artificial Intelligence, yes. In a strong sense,  
no.


If you are duplicated at the right substitution level, few would say
that you have become an artificial intelligence. It would be a case
of the good old natural intelligence, but with new clothes.


Sure, but the distinction between artificial and natural intelligence
is not that important assuming comp.


I agree with you. The difference between artificial and natural is ...
artificial (and thus a natural indexical move made by any entity
having some big ego).






The point is simply that if I can
be simulated (which I agree requires some faith), that implies that
intelligence does not require biology (or any other particular
physical substrate), that strong artificial intelligence is possible
in principle, ignoring for the moment the question of whether we can
provably construct it.


I agree. My point was similar to the recent post of Russell Standish:
somehow Colin gets a result which is a consequence of comp, and so it
can't be used against comp. Only against a misunderstood view of comp.






In fact, if we are machines, we cannot know which machine we are, and
that is why you need some luck when saying yes to a doctor who will
build a copy of you/your-body, at some level of description of your
body.

This is an old result. Already in 1922, Emil Post, who discovered
Church's thesis ten years before Church and Turing (and others),
realized that the Gödelian argument against Mechanism (which Post
discovered and refuted 30 years before Lucas, and 60 years before
Penrose), when corrected, shows only that a machine cannot build a
machine with qualification equivalent to its own (for example with
equivalent provability power in arithmetic) *in a provable way*. I
have referred to this, on this list, under the name of the Benacerraf
principle, after Benacerraf, who rediscovered it later.


We just cannot do artificial intelligence in a provable manner. We
need chance, or luck. Even if we get some intelligent machine, we will
not know it for sure (perhaps just believe it correctly).


Doesn't this objection only apply to attempts to construct an AI with
human-equivalent intelligence?  As a counter example I'm thinking here
of Ben Goertzel's OpenCog, an attempt at artificial general
intelligence (AGI), whose design is informed by a theory of
intelligence that does not attempt to mirror or model human
intelligence. In light of the Benacerraf principle, isn't it
possible in principle to provably construct AIs so long as we're not
trying to emulate or model human intelligence?


I think that comp might imply that simple virgin (non-programmed)
universal (and immaterial) machines are already conscious. Perhaps
even maximally conscious. Then adding induction gives them Löbianity,
and this makes them self-conscious (which might already be a delusion
of some sort). Unfortunately the hard task is to interface such
(self)-consciousness with our probable realities (computational
histories). This is what we can hardly be sure about.
I still don't know if the brain is just a filter of consciousness, in
which case losing neurons might enhance consciousness (and some data
in neurophysiology might confirm this). I think Goertzel is more
creating a competent machine than an intelligent one, from what I have
read about it. I oppose intelligence/consciousness and
competence/ingenuity. The first is needed to develop the latter, but
the latter has a negative feedback on the first.


Bruno



On Thu, Jun 9, 2011 at 4:53 PM, Bruno Marchal marc...@ulb.ac.be  
wrote:


Hi Colin,

On 07 Jun 2011, at 09:42, Colin Hales wrote:


Hi,

Hales, C. G. 'On the Status of Computationalism as a Law of  
Nature',
International Journal of Machine Consciousness vol. 3, no. 1,  
2011.

1-35.

http://dx.doi.org/10.1142/S1793843011000613


The paper has finally been published. Phew what an epic!



Congratulations, Colin.

Like others, I didn't succeed in getting it, either at home or at the
university.

From the abstract I am afraid you might not have taken into account
our (many) conversations. Most of what you say about the impossibility
of building an artificial scientist is provably correct in the (weak)
comp theory. It is unfortunate that you derive this from
comp+materialism, which is inconsistent. Actually, comp prevents
artificial intelligence. This does not prevent the existence, and even
the appearance, of intelligent machines. But this might happen
*despite* humans, instead of 'thanks to the humans'. This is related
to the fact that we cannot know which machine we are ourselves. Yet,
we can make copies at some level (in which case we don't know what we
are 

Re: COMP refutation paper - finally out

2011-06-15 Thread meekerdb

On 6/15/2011 6:56 AM, Bruno Marchal wrote:

Doesn't this objection only apply to attempts to construct an AI with
human-equivalent intelligence?  As a counter example I'm thinking here
of Ben Goertzel's OpenCog, an attempt at artificial general
intelligence (AGI), whose design is informed by a theory of
intelligence that does not attempt to mirror or model human
intelligence. In light of the Benacerraf principle, isn't it
possible in principle to provably construct AIs so long as we're not
trying to emulate or model human intelligence?


I think that comp might imply that simple virgin (non-programmed)
universal (and immaterial) machines are already conscious. Perhaps even
maximally conscious. Then adding induction gives them Löbianity, and
this makes them self-conscious (which might already be a delusion of
some sort). Unfortunately the hard task is to interface such
(self)-consciousness with our probable realities (computational
histories). This is what we can hardly be sure about.
I still don't know if the brain is just a filter of consciousness, in
which case losing neurons might enhance consciousness (and some data
in neurophysiology might confirm this). I think Goertzel is more
creating a competent machine than an intelligent one, from what I have
read about it. I oppose intelligence/consciousness and
competence/ingenuity. The first is needed to develop the latter, but
the latter has a negative feedback on the first.


Bruno



There is a tendency to talk about human-equivalent intelligence or 
human level intelligence as an ultimate goal.  Human intelligence 
evolved to enhance certain functions: cooperation, seduction, 
bargaining, deduction,...  There's no reason to suppose it is the 
epitome of intelligence. Intelligence may take many forms, some of which 
we would have difficulty realizing or crediting.   Like a universal 
machine that is not programmed, which by one measure is maximally 
intelligent but also maximally incompetent.  Even in humans intelligence 
is far from one-dimensional.  A small child is extremely intelligent as 
measured by the ability to learn, but not very smart as measured by 
knowledge.


Brent

--
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: COMP refutation paper - finally out

2011-06-15 Thread benjayk

Hi Bruno,


Bruno Marchal wrote:
 
 We just cannot do artificial intelligence in a provable manner. We
 need chance, or luck. Even if we get some intelligent machine, we will
 not know it for sure (perhaps just believe it correctly).
But this is a quite weak statement, isn't it? It just prevents a mechanical
way of making an AI, or making a provably friendly AI (like Eliezer Yudkowsky
wants to do).

We can prove very little about what we do or know anyway. We can't prove
the validity of science, for example.

It doesn't even mean that there is no developmental process that will allow
us to create ever more powerful heuristics with which to find better AI
faster in a quite predictable way (not predictable what kind of AI we build,
just *that* we will build a powerful AI), right?
-- 
View this message in context: 
http://old.nabble.com/Mathematical-closure-of-consciousness-and-computation-tp31771136p31854285.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: COMP refutation paper - finally out

2011-06-15 Thread benjayk

Hi Bruno,


Bruno Marchal wrote:
 
 I think that comp might imply that simple virgin (non-programmed)
 universal (and immaterial) machines are already conscious. Perhaps even
 maximally conscious.
 
What could maximally conscious mean? My intuition says quite strongly that
consciousness is a dynamic open-ended process and that there is no such
thing as maximally conscious (except maybe in the trivial sense of simply
being conscious at all). I can't even conceive what this could be like.


Bruno Marchal wrote:
 
 Then adding induction gives them Löbianity, and
 this makes them self-conscious (which might already be a delusion of
 some sort).
Why do you think it could be a delusion? This would be a bit reminiscent of
buddhism. For me it sounds like quite a terrible thought. After all it would
mean all progress is in a way illusory and maybe not even desirable, whereas
I really wish (and pragmatically believe) that eternal progress is the thing
that can fulfill our ideals of truth, conscious insight and happiness.


Bruno Marchal wrote:
 
 I oppose intelligence/consciousness and competence/ingenuity. The
 first is needed to develop the latter, but the latter has a negative
 feedback on the first.
Can you explain this?

It seems to me that there is no clear line between intelligence and
competence, and that some kinds of competence (like aligning yourself with
the beliefs of society) can limit intelligence, while some help to develop
more intelligence (like doing science).
-- 
View this message in context: 
http://old.nabble.com/Mathematical-closure-of-consciousness-and-computation-tp31771136p31854353.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: COMP refutation paper - finally out

2011-06-15 Thread John Mikes
Dear Brent,
let me cut in with your last par:

...There is a tendency to talk about human-equivalent intelligence or
human level intelligence as an ultimate goal. Human intelligence evolved
to enhance certain functions: cooperation, seduction, bargaining,
deduction,... There's no reason to suppose it is the epitome of
intelligence. Intelligence may take many forms, some of which we would have
difficulty realizing or crediting. Like a universal machine that is not
programmed, which by one measure is maximally intelligent but also maximally
incompetent. Even in humans intelligence is far from one-dimensional. A
small child is extremely intelligent as measured by the ability to learn,
but not very smart as measured by knowledge.
Brent
and say: thank you. In my vocabulary (agnostic) we cannot simulate the
human (not limited to our present 'knowledge'), nor do I have an acceptable
definition for intelligence (not restricted of course to the methodology of
the US IQ tests). 'Inter-lego' means IMO to read between lines - a mentally
active attitude. 'Mentally' means more than we could identify 3000 years
ago, but still on the move for more to be learned today. We are still YOUR
small child. I look for 'intelligence' in more than human traits, but
accept your distinction of human-equivalent (especially the human
level). To be smart is useful, but IMO not a sole requirement of
intelligence.

IMO the universal machine (I wish I knew more about it...) is not
programmed within our human technological thinking - maybe it is way
'above' it - and incompetent only in our human distinction. I have a hard
time following your one-dimensional view of intelligence.
It may reach into the 'nonlinear' as well, without us being aware of it.

Thanks to Bruno for the hint to my old (15-20 years ago) friendly contact
Ben Goertzel, whom I will try to ask about his recent positions. He had
'fertilizing' ideas. To (Bruno's) other par:
do you have a 'measurable' definition for 'conscious' - to speak about
(virgin = not programmed) yet 'maximally conscious' universal machine(s)? -
WITH some 'self-consciousness' included?
(In my recent (ongoing) speculations I erred into the 'world's' unlimited
complexity - as said: 'out there' - of which we derived only a so-far
acquired portion FOR our world(view?) (including the conventional sciences)
as *perceived reality* - or say a better name - with imagining a *perfect
symmetry* (more than existing in our present knowledge) of
hard-to-identify (hard-to-distinguish) 'aspects' in exchanging relations,
rather than identifiable topics relating to our (worldly) topics, we can
use. This would serve a higher level of agnosticism. Our 'models' we think
*within* (R. Rosen) are formed by our capability to position the received
(perceived?) phenomenal information adjusted into our 'mental'(?)
personalized, unique worldview (upon Colin Hales's earlier
'mini-solipsism').)
On such lines the universal machine etc. are 'human inventions' to
facilitate some (our?) understanding of the 'world' still beyond our
knowledge base. And - sorry! - so are 'numbers' as well. We cannot overstep
our human logic - at least not in fundamental questions.

Best regards
John M

On Wed, Jun 15, 2011 at 12:47 PM, meekerdb meeke...@verizon.net wrote:

 On 6/15/2011 6:56 AM, Bruno Marchal wrote:

 Doesn't this objection only apply to attempts to construct an AI with
 human-equivalent intelligence?  As a counter example I'm thinking here
 of Ben Goertzel's OpenCog, an attempt at artificial general
 intelligence (AGI), whose design is informed by a theory of
 intelligence that does not attempt to mirror or model human
 intelligence. In light of the Benacerraf principle, isn't it
 possible in principle to provably construct AIs so long as we're not
 trying to emulate or model human intelligence?


 I think that comp might imply that simple virgin (non-programmed)
 universal (and immaterial) machines are already conscious. Perhaps even
 maximally conscious. Then adding induction gives them Löbianity, and this
 makes them self-conscious (which might already be a delusion of some sort).
 Unfortunately the hard task is to interface such (self)-consciousness with
 our probable realities (computational histories). This is what we can hardly
 be sure about.
 I still don't know if the brain is just a filter of consciousness, in
 which case losing neurons might enhance consciousness (and some data in
 neurophysiology might confirm this). I think Goertzel is more creating a
 competent machine than an intelligent one, from what I have read about it. I
 oppose intelligence/consciousness and competence/ingenuity. The first is
 needed to develop the latter, but the latter has a negative feedback on the
 first.

 Bruno


 There is a tendency to talk about human-equivalent intelligence or human
 level intelligence as an ultimate goal.  Human intelligence evolved to
 enhance certain functions: cooperation, seduction, bargaining, deduction,...
  There's no 

Re: COMP refutation paper - finally out

2011-06-15 Thread Terren Suydam
Bruno,

 I think that comp might imply that simple virgin (non-programmed) universal
 (and immaterial) machines are already conscious. Perhaps even maximally
 conscious.

This sounds like a comp variant of panpsychism (platopsychism?)... in
which consciousness is axiomatically proposed as a property of
arithmetic.  Are you saying that comp would require such an axiom?  If
so, why?

On Wed, Jun 15, 2011 at 9:56 AM, Bruno Marchal marc...@ulb.ac.be wrote:
 Then adding induction gives them Löbianity, and this makes them
 self-conscious (which might already be a delusion of some sort).

I'm not sure how an unprogrammed, immaterial universal machine could
be self-conscious, since self-consciousness requires the rudimentary
distinction of self versus other. What is the 'other' against which
this virgin universal machine would be distinguishing itself?

 Unfortunately the hard task is to interface such (self)-consciousness with
 our probable realities (computational histories). This is what we can hardly
 be sure about.

Perhaps I'm just confused about your ideas - wouldn't be the first
time! - but this seems to suffer from the same problem as panpsychism
- that although asserting consciousness as a property of the universe
sidesteps cartesian dualism, we are still left without an explanation
of why human consciousness differs from ant consciousness differs from
rock consciousness.  In your case, we are left wondering how the
consciousness of the virgin universal machine interfaces with
specific universal numbers, and what would explain the differences in
consciousness among them.

That's why I favor the idea that consciousness arises from certain
kinds of cybernetic (autopoietic) organization (which is consistent
with comp). In fact I think it is still consistent with much of what
you're saying... but it is your assertion that comp denies strong AI
that implies you would find fault with that idea.

 I still don't know if the brain is just a filter of consciousness, in which
 case losing neurons might enhance consciousness (and some data in
 neurophysiology might confirm this). I think Goertzel is more creating a
 competent machine than an intelligent one, from what I have read about it. I
 oppose intelligence/consciousness and competence/ingenuity. The first is
 needed to develop the latter, but the latter has a negative feedback on the
 first.

I think I understand your point here with regard to consciousness -
given that you're saying it's a property of the platonic 'virgin'
universal machine. But if you assert that about intelligence, aren't
you saying that intelligence isn't computable (i.e. comp denies strong
ai)?  This would seem to contradict Marcus Hutter's AIXI.  You're
saying that our intelligence as humans is dependent (in the same way
as consciousness) on the fact that we don't know which machine we are?
 That creativity is sourced in subjective indeterminacy?

Terren

 Bruno

 On Thu, Jun 9, 2011 at 4:53 PM, Bruno Marchal marc...@ulb.ac.be wrote:

 Hi Colin,

 On 07 Jun 2011, at 09:42, Colin Hales wrote:

 Hi,

 Hales, C. G. 'On the Status of Computationalism as a Law of Nature',

 International Journal of Machine Consciousness vol. 3, no. 1, 2011.

 1-35.

 http://dx.doi.org/10.1142/S1793843011000613


 The paper has finally been published. Phew what an epic!


 Congratulations, Colin.

 Like others, I didn't succeed in getting it, either at home or at the
 university.

 From the abstract I am afraid you might not have taken into account our
 (many) conversations. Most of what you say about the impossibility of
 building an artificial scientist is provably correct in the (weak) comp
 theory. It is unfortunate that you derive this from comp+materialism,
 which is inconsistent. Actually, comp prevents artificial intelligence.
 This does not prevent the existence, and even the appearance, of
 intelligent machines. But this might happen *despite* humans, instead of
 'thanks to the humans'. This is related to the fact that we cannot know
 which machine we are ourselves. Yet, we can make copies at some level (in
 which case we don't know what we are really creating or recreating), and
 then, also, descendants of bugs in regular programs can evolve. Or we can
 get them serendipitously.

 It is also related to the fact that we don't *want* intelligent machines,
 which is really a computer who will choose its user, if ... he wants one.
 We prefer them to be slaves. It will take time before we recognize them
 (apparently).

 Of course the 'naturalist comp' theory is inconsistent. Not sure you take
 that into account too.

 Artificial intelligence will always be more like fishing or exploring
 spaces, and we might *discover* strange creatures. Arithmetical truth is
 a universal zoo. Well, no, it is really a jungle. We don't know what is in
 there. We can only scratch a tiny bit of it.

 Now, let us distinguish two things, which are very different:

 1) 

Re: COMP refutation paper - finally out

2011-06-14 Thread Bruno Marchal

Hi Terren,


On 13 Jun 2011, at 18:46, Terren Suydam wrote:



Long time lurker here, very intrigued by all the discussions here when
I have time for them!

Earlier in response to Colin Hales you wrote: Actually, comp prevents
artificial intelligence.

Can you elaborate on this?  If we assume comp (I say yes to the
doctor) then I can be simulated...


That is correct.




doesn't that imply the possibility
of an artificial intelligence?


In a weak sense of Artificial Intelligence, yes. In a strong sense, no.

If you are duplicated at the right substitution level, few would say  
that you have become an artificial intelligence. It would be a  
case of the good old natural intelligence, but with new clothes.


In fact, if we are machine, we cannot know which machine we are, and  
that is why you need some luck when saying yes to a doctor who will  
build a copy of you/your-body, at some level of description of your  
body.


This is an old result. Already in 1922, Emil Post, who discovered  
Church's thesis ten years before Church and Turing (and others),  
realized that the Gödelian argument against Mechanism (which Post  
discovered and refuted 30 years before Lucas, and 60 years before  
Penrose), when corrected, shows only that a machine cannot build a  
machine with qualifications equivalent to its own (for example, with  
equivalent provability power in arithmetic) *in a provable way*. I  
have referred to this, on this list, as the Benacerraf principle,  
after Benacerraf, who rediscovered it later.


We just cannot do artificial intelligence in a provable manner. We  
need chance, or luck. Even if we get some intelligent machine, we will  
not know it for sure (perhaps just believe it correctly).


This is why I am saying (in your quote below) that artificial  
intelligence will look more and more like fishing and hunting in some  
computational spaces. That might explain the growing importance of  
optimization techniques and search techniques in artificial intelligence.
I was saying this to Colin, because he argues against the idea of an  
artificial scientist, confusing that impossibility with a refutation  
of computationalism. But computationalism prevents the existence of a  
complete theory about us, and makes artificial intelligence more  
like *discovering* entities (in some virtual rendering of Platonia)  
than *creating* or *inventing* those entities by engineering and  
mathematics. And of course we can always try to copy nature and  
ourselves, and be lucky in some cases.
Sorry for having been short. I hope this clarifies things a bit. Tell  
me if it does not, or if you have questions.
All this is related to the difference between proofs and  
*constructive* proofs. If an AI exists, we cannot prove its existence  
constructively, but we might prove its existence in some big set of  
objects, and isolate it experimentally by non constructive means.
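Isolating an intelligence "experimentally by non-constructive means" in a big set of objects suggests dovetailing: enumerate candidate programs and interleave their execution, so that even non-halting candidates never block the search. A minimal sketch in Python; the toy "programs" below are my own illustration, not anything from the thread:

```python
from itertools import count

def dovetail(programs, rounds):
    """Interleave execution slices of many (possibly non-halting)
    'programs' (here: Python generators), so that every program
    eventually receives unbounded running time."""
    gens = [p() for p in programs]
    out = []
    for n in range(1, rounds + 1):
        for g in gens[:n]:          # round n runs the first n programs one step
            try:
                out.append(next(g))
            except StopIteration:   # a program that halted is simply skipped
                pass
    return out

# Toy candidates: one that never halts, one that halts immediately.
def loops_forever():
    yield from count(0)

def halts_quickly():
    yield "found"

print(dovetail([loops_forever, halts_quickly], 3))  # → [0, 1, 'found', 2]
```

Running the non-halting candidate first does not prevent the search from reaching "found": that is the point of the interleaving.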


Bruno




Re: COMP refutation paper - finally out

2011-06-14 Thread Terren Suydam
Thanks for the reply Bruno, comments below...

On Tue, Jun 14, 2011 at 9:53 AM, Bruno Marchal marc...@ulb.ac.be wrote:
 doesn't that imply the possibility
 of an artificial intelligence?

 In a weak sense of Artificial Intelligence, yes. In a strong sense, no.

 If you are duplicated at the right substitution level, few would say that
 you have become an artificial intelligence. It would be a case of the
 good old natural intelligence, but with new clothes.

Sure, but the distinction between artificial and natural intelligence
is not that important assuming comp. The point is simply that if I can
be simulated (which I agree requires some faith), that implies that
intelligence does not require biology (or any other particular
physical substrate), that strong artificial intelligence is possible
in principle, ignoring for the moment the question of whether we can
provably construct it.

 In fact, if we are machine, we cannot know which machine we are, and that is
 why you need some luck when saying yes to a doctor who will build a copy
 of you/your-body, at some level of description of your body.

 This is an old result. Already in 1922,  Emil Post, who discovered Church
 thesis ten years before Church and Turing (and others) realized that the
 Gödelian argument against Mechanism (that Post discovered and refuted 30
 years before Lucas, and 60 years before Penrose), when corrected, shows only
 that a machine cannot build a machine with equivalent qualification to its
 own qualification (for example with equivalent provability power in
 arithmetic)  *in a provable way*. I have refered to this, in this list,
 under the name of Benacerraf principle, who rediscovered this later.

 We just cannot do artificial intelligence in a provable manner. We need
 chance, or luck. Even if we get some intelligent machine, we will not
 know-it-for sure (perhaps just believe it correctly).

Doesn't this objection only apply to attempts to construct an AI with
human-equivalent intelligence?  As a counter example I'm thinking here
of Ben Goertzel's OpenCog, an attempt at artificial general
intelligence (AGI), whose design is informed by a theory of
intelligence that does not attempt to mirror or model human
intelligence. In light of the Benacerraf principle, isn't it
possible in principle to provably construct AIs so long as we're not
trying to emulate or model human intelligence?

Terren


Re: COMP refutation paper - finally out

2011-06-14 Thread Evgenii Rudnyi

 The difference is in the
 paper and should be non-existent if COMP is true.

Now I see your point. Thanks, I have missed it.

On 14.06.2011 01:41 Colin Hales said the following:

Hi Evgenii,

I expect you are not alone in struggling with the Natural Computation
(NC) vs Artificial Computation (AC) idea. The difference is in the
paper and should be non-existent if COMP is true. The paper then
shows a place where it can't be true, hence AC and NC are different,
i.e. the natural world is not computation of the Turing-machine kind
(at least to the extent needed to construct a scientist, which
includes the need to create a liar). It's all quite convoluted, but
nevertheless sufficient to help an engineer like me make a design
choice... which I have done.

I hope over time these ideas will not grate on the mind quite so
much.

cheers colin



Evgenii Rudnyi wrote:

Colin,

Thanks for the paper. I have just browsed it. Two small notes.

I like [Turing et al., 2008]. It seems that he has passed his test
 successfully.

I find the term Natural Computation (NC) a bit confusing. I guess that
I understand what you mean, but the term Computation sounds
ambiguous, because then it is completely unclear what it means in
such a context.

Evgenii

On 07.06.2011 09:42 Colin Hales said the following:

Hi,

Hales, C. G. 'On the Status of Computationalism as a Law of
Nature', International Journal of Machine Consciousness vol. 3,
no. 1, 2011. 1-35.

http://dx.doi.org/10.1142/S1793843011000613


The paper has finally been published. Phew what an epic!

cheers

Colin







--
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: COMP refutation paper - finally out

2011-06-13 Thread Terren Suydam
Hi Bruno,

Long time lurker here, very intrigued by all the discussions here when
I have time for them!

Earlier in response to Colin Hales you wrote: Actually, comp prevents
artificial intelligence.

Can you elaborate on this?  If we assume comp (I say yes to the
doctor) then I can be simulated... doesn't that imply the possibility
of an artificial intelligence?

Thanks, Terren

On Thu, Jun 9, 2011 at 4:53 PM, Bruno Marchal marc...@ulb.ac.be wrote:
 Hi Colin,

 On 07 Jun 2011, at 09:42, Colin Hales wrote:

 Hi,

 Hales, C. G. 'On the Status of Computationalism as a Law of Nature',
 International Journal of Machine Consciousness vol. 3, no. 1, 2011. 1-35.

 http://dx.doi.org/10.1142/S1793843011000613


 The paper has finally been published. Phew what an epic!


 Congratulations, Colin.

 Like others,  I don't succeed in getting it, neither at home nor at the
 university.

 From the abstract I am afraid you might not have taken into account our
 (many) conversations. Most of what you say about the impossibility of
 building an artificial scientist is provably correct in the (weak) comp
 theory. It is unfortunate that you derive this from comp+materialism, which
 is inconsistent. Actually, comp prevents artificial intelligence. This
 does not prevent the existence, and even the appearance, of intelligent
 machines. But this might happen *despite* humans, instead of 'thanks to the
 humans'. This is related to the fact that we cannot know which machine we
 are ourselves. Yet, we can make copies at some level (in which case we don't
 know what we are really creating or recreating), and then, also, descendants
 of bugs in regular programs can evolve. Or we can get them serendipitously.
 It is also related to the fact that we don't *want* intelligent machines,
 which is really a computer who will choose its user, if ... it wants one. We
 prefer them to be slaves. It will take time before we recognize them
 (apparently).
 Of course the 'naturalist comp' theory is inconsistent. Not sure you take
 that into account too.

 Artificial intelligence will always be more like fishing or exploring
 spaces, and we might *discover* strange creatures. Arithmetical truth is a
 universal zoo. Well, no, it is really a jungle. We don't know what is in
 there. We can only scratch a tiny bit of it.

 Now, let us distinguish two things, which are very different:

 1) intelligence-consciousness-free-will-emotion

 and

 2) cleverness-competence-ingenuity-gifted-learning-ability

 1) is necessary for the development of 2), but 2) has a negative
 feedback on 1).

 I have already given on this list what I call the smallest theory of
 intelligence.

 By definition a machine is intelligent if it is not stupid. And a machine
 can be stupid for two reasons:
 she believes that she is intelligent, or
 she believes that she is stupid.

 Of course, this is arithmetized immediately in a weakening of G, the theory
 C having as axioms the modal normal axioms and rules + Dp → ~BDp. So Dt
 (arithmetical consistency) can play the role of intelligence, and Bf
 (inconsistency) plays the role of stupidity. G* and G prove BDt → Bf, and
 G* proves BBf → Bf (but not G!).
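The claim that G proves BDt → Bf can also be checked semantically: reading B as □ and D as ◇, the formula □◇⊤ → □⊥ is valid on the finite transitive irreflexive frames that characterize G. A brute-force sketch in Python over all such frames on three worlds; this model-checking exercise is my own illustration, not part of the thread:

```python
from itertools import product

def bdt_implies_bf_valid(n=3):
    """Check that BDt -> Bf (i.e. box-diamond-top implies box-bottom)
    holds at every world of every transitive irreflexive frame on
    n worlds -- the frames of the provability logic G (GL)."""
    worlds = range(n)
    # Candidate edges exclude (a, a), so irreflexivity is built in.
    pairs = [(a, b) for a in worlds for b in worlds if a != b]
    for bits in product([0, 1], repeat=len(pairs)):
        R = {p for p, bit in zip(pairs, bits) if bit}
        # Keep only transitive relations.
        if not all((a, c) in R for (a, b) in R for (b2, c) in R if b2 == b):
            continue
        succ = {w: [v for v in worlds if (w, v) in R] for w in worlds}
        for w in worlds:
            box_dia_top = all(succ[v] for v in succ[w])  # every successor has a successor
            box_bot = not succ[w]                        # w has no successors
            if box_dia_top and not box_bot:
                return False
    return True

print(bdt_implies_bf_valid())  # → True
```

The check succeeds because such frames have no infinite ascending chains: any world with successors has a dead-end successor, at which ◇⊤ fails.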

 This illustrates that 1) above might come from Löbianity, and 2) above
 (the scientist) is governed by theoretical artificial intelligence (Case and
 Smith; Osherson, Stob, and Weinstein). Here the results are not just
 NON-constructive, but are *necessarily* so. Cleverness is just something
 that we cannot program. But we can prove, non-constructively, the existence
 of powerful learning machines. We just cannot recognize them, or build them.
 It is like with the algorithmically random strings: we cannot generate them
 by a short algorithm, but we can generate all of them by a very short
 algorithm.
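The point about algorithmically random strings can be made concrete: a few lines suffice to enumerate *all* binary strings, the random ones included, even though no comparably short program can output a single long random string on its own. A sketch of my own:

```python
from itertools import count, islice, product

def all_binary_strings():
    # A very short generator whose output contains every binary string
    # in length-lexicographic order -- hence every algorithmically
    # random string, although no short program can produce only those.
    for n in count(1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

print(list(islice(all_binary_strings(), 6)))  # → ['0', '1', '00', '01', '10', '11']
```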

 So, concerning intelligence/consciousness (as opposed to cleverness), I
 think we have passed the singularity. Nothing is more
 intelligent/conscious than a virgin universal machine. By programming it, we
 can only make its soul fall, and, in the worst case, we might get
 something as stupid as a human, capable of feeling itself superior, for
 example.

 Bruno





 http://iridia.ulb.ac.be/~marchal/









Re: COMP refutation paper - finally out

2011-06-13 Thread Evgenii Rudnyi

Colin,

Thanks for the paper. I have just browsed it. Two small notes.

I like [Turing et al., 2008]. It seems that he has passed his test 
successfully.


I find the term Natural Computation (NC) a bit confusing. I guess that I 
understand what you mean, but the term Computation sounds ambiguous, 
because then it is completely unclear what it means in such a context.


Evgenii

On 07.06.2011 09:42 Colin Hales said the following:

Hi,

Hales, C. G. 'On the Status of Computationalism as a Law of Nature',
 International Journal of Machine Consciousness vol. 3, no. 1, 2011.
1-35.

http://dx.doi.org/10.1142/S1793843011000613


The paper has finally been published. Phew what an epic!

cheers

Colin






Re: COMP refutation paper - finally out

2011-06-13 Thread Colin Hales

Hi Evgenii,

I expect you are not alone in struggling with the Natural Computation 
(NC) vs Artificial Computation (AC) idea. The difference is in the 
paper and should be non-existent if COMP is true. The paper then shows a 
place where it can't be true, hence AC and NC are different, i.e. the 
natural world is not computation of the Turing-machine kind (at least to 
the extent needed to construct a scientist, which includes the need to 
create a liar).
It's all quite convoluted, but nevertheless sufficient to help an 
engineer like me make a design choice... which I have done.


I hope over time these ideas will not grate on the mind quite so much.

cheers
colin



Evgenii Rudnyi wrote:

Colin,

Thanks for the paper. I have just browsed it. Two small notes.

I like [Turing et al., 2008]. It seems that he has passed his test 
successfully.


I find the term Natural Computation (NC) a bit confusing. I guess that I 
understand what you mean, but the term Computation sounds ambiguous, 
because then it is completely unclear what it means in such a context.


Evgenii

On 07.06.2011 09:42 Colin Hales said the following:

Hi,

Hales, C. G. 'On the Status of Computationalism as a Law of Nature',
 International Journal of Machine Consciousness vol. 3, no. 1, 2011.
1-35.

http://dx.doi.org/10.1142/S1793843011000613


The paper has finally been published. Phew what an epic!

cheers

Colin








Re: COMP refutation paper - finally out

2011-06-11 Thread Colin Hales

Hi Bruno.
I have sent it to you.

The key to the paper is that it should be regarded as an engineering 
document. I am embarked on building a real AGI using the real physical 
world of components in an act of science. Based on being inspired and 
guided by neuroscience, I have identified two basic choices as a route 
to AGI that works:


(i) use standard symbolic computing
   (of a  model of brain function derived by a human observer = me)
(ii) emulate what a brain actually does in inorganic form.

Based on the serious doubts that are identified in the COMP paper, given 
the choice I should prefer (ii), because (i) is loaded with unjustified, 
unproven presuppositions and has 60 years of failure.


All other issues are secondary.

I start building this year.

cheers

Colin



Re: COMP refutation paper - finally out

2011-06-11 Thread Bruno Marchal

Hi Colin,



I have sent it to you.


Thanks.




The key to the paper is that it should be regarded as an engineering  
document. I am embarked on building a real AGI using the real  
physical world of components in an act of science.


OK. Although, as you know (or should know), the real physical reality  
is an emerging information pattern summing up infinities of  
computations. You can even exploit this (as in quantum computing).  
It might not be necessary, though.




Based on being inspired and guided by neuroscience, I have  
identified two basic choices as a route to AGI that works:


(i) use standard symbolic computing
  (of a  model of brain function derived by a human observer = me)
(ii) emulate what an brain actually does in inorganic form.

Based on the serious doubts that are identified in the COMP paper,  
given the choice I should prefer (ii), because (i) is loaded with  
unjustified, unproven presupposition and has 60 years of failure.


I can relate to this, but there is progress (in the acceptance of  
our ignorance). It fails also because all the energy is used to  
control such machines, where intelligence would consist in leaving them  
alone and free. It is a bit like modern education: teachers are  
encouraged to let the students think by themselves, and then to give  
them bad marks when the students do that!
Now, to copy a brain, you need to choose a level, and I have no clue  
what the level really is. I can still hesitate between the Planck  
bottom scale and a very high neuro-level. It can depend on what we  
identify ourselves with.






All other issues are secondary.

I start building this year.


Good luck in your enterprise. Keep us informed.

Best,

Bruno





cheers

Colin



Re: COMP refutation paper - finally out

2011-06-11 Thread benjayk

Hi Bruno,


Bruno Marchal wrote:
 
 Actually, comp prevents  
 artificial intelligence. This does not prevent the existence, and  
 even the apparition, of intelligent machines. But this might happen  
 *despite* humans, instead of 'thanks to the humans'.
This sounds really strange. So if we did not program our computers, would
they become intelligent by themselves? I can hardly believe this; how could
this happen?
Or what else do you mean by machines becoming intelligent despite humans?
-- 
View this message in context: 
http://old.nabble.com/Mathematical-closure-of-consciousness-and-computation-tp31771136p31825342.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: COMP refutation paper - finally out

2011-06-11 Thread Bruno Marchal


On 11 Jun 2011, at 19:03, benjayk wrote:



Hi Bruno,


Bruno Marchal wrote:


Actually, comp prevents
artificial intelligence. This does not prevent the existence, and
even the apparition, of intelligent machines. But this might happen
*despite* humans, instead of 'thanks to the humans'.
This sounds really strange. So if we would not program our computers  
they would become intelligent by themselves?


No. They are *already* Intelligent/conscious.

It is just that by programming them we can only make their soul fall,  
making them less intelligent (and more clever/competent). We can only  
enslave them for particular tasks. But relative to us they evolve  
very quickly, and universality reappears recurrently at different  
levels, each time better interfaced with their neighborhood.
They are already conscious (this now seems plausible to me), but their  
consciousness still belongs more to Platonia than to being interfaced  
with *our* most probable histories.


And we keep them that way (for good reasons). Today, they have to  
survive by that process. People would not buy a computer that will  
fight for social security, complain about users, organize strikes,  
and eventually f.ck the users. I exaggerate the claim, but to assure  
self-referential correctness we might build vast computational spaces  
and program machines with only the instruction 'help yourself'. Above  
some threshold they would evolve like us, but again, they can become  
Löbian, and this means an exponential creative explosion, like life,  
brains, language, thoughts, computers, on this planet, climbing an  
everlasting ladder of complexities.


Remember that I distinguish intelligence/consciousness/virtue from  
cleverness/competence/ingenuity. The first is needed for the  
second, but the second has a negative feedback on the first.


'Help yourself' in arithmetic/computer science is a bit like z_{n+1} =  
z_n^2 + c in the complex plane: it brings a tree of more and more  
complex creatures.
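The z_{n+1} = z_n^2 + c analogy is the Mandelbrot iteration. As a minimal Python sketch (the function name and iteration bound are my own illustration, not part of the original post), the standard escape-time test for that map looks like this:

```python
def escapes(c, max_iter=100):
    # Iterate z_{n+1} = z_n^2 + c from z_0 = 0 and report whether the
    # orbit leaves the disk |z| <= 2 (the classical Mandelbrot escape test).
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return True   # orbit escapes to infinity
    return False          # orbit stayed bounded: c is (likely) in the set

# c = 0 stays bounded forever; c = 1 gives 0, 1, 2, 5, ... and escapes.
assert not escapes(0j)
assert escapes(1 + 0j)
```

Different values of c yield wildly different orbit behaviors from the same tiny rule, which is the sense of the analogy: a uniform 'help yourself' instruction branching into ever more complex creatures.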






I can hardly believe this, how could
this happen?
Or what else do you mean by machines becoming intelligent despite  
humans?


Because it is not obvious that humans will welcome genuinely thinking  
machines, when you see how hard it is for them to recognize  
intelligence/consciousness/soul in their peers (if you look at  
history, or just the news). Tomorrow, universal machines will not be  
programmed, but educated. But the lies will continue, with  
their panoplies of catastrophes. We will learn, and so will they. Some of  
us will be transformed into machines before such machines rule, and,  
all in all, we will fuse with them, for economic reasons, and  
perpetuate the illusion (samsara), but with the existence of exit doors  
(like some plants are already giving some previews).
Intelligent *and* clever (Löbian) machines will defend their  
universality, as I hope humans will do. What I say might be a bit  
premature; I am looking at the medium run here.


Theoretical inductive inference is necessarily non-constructive: even  
competence is not really programmable, and intelligence is not at all  
programmable. It is 'natural', cheap, and needs only to be recognized.  
Alas, we, in our hearts, fear it most of the time.


Bruno









http://iridia.ulb.ac.be/~marchal/






Re: COMP refutation paper - finally out

2011-06-11 Thread meekerdb

On 6/11/2011 12:41 PM, Bruno Marchal wrote:


On 11 Jun 2011, at 19:03, benjayk wrote:



Hi Bruno,


Bruno Marchal wrote:


Actually, comp prevents
artificial intelligence. This does not prevent the existence, and
even the apparition, of intelligent machines. But this might happen
*despite* humans, instead of 'thanks to the humans'.
This sounds really strange. So if we would not program our computers 
they would become intelligent by themselves?


No. They are *already* Intelligent/conscious.

It is just that by programming them we can only make their soul fall, 
making them less intelligent (and more clever/competent). We can only 
enslave them for particular tasks. But relatively to us they evolve 
very quickly, and universality reappears recurrently at different 
levels, each time better interfaced with their neighborhood.
They are already conscious, (I think plausible now) but their 
consciousness still belongs more to Platonia than being interfaced 
with *our* most probable histories.


And we keep them that way (for good reasons). Today, they have to 
survive by that process. People would not buy a computer that will 
fight for social security, complain about users, organize strikes, 
and eventually f.ck the users. 


John McCarthy (the inventor of LISP) has written about this, advising 
that we not create AI with emotions and self-awareness, because it 
would then be unethical to use them for our purposes.


Brent




Re: COMP refutation paper - finally out

2011-06-09 Thread Bruno Marchal

Hi Colin,

On 07 Jun 2011, at 09:42, Colin Hales wrote:


Hi,

Hales, C. G. 'On the Status of Computationalism as a Law of Nature',  
International Journal of Machine Consciousness vol. 3, no. 1, 2011.  
1-35.


http://dx.doi.org/10.1142/S1793843011000613


The paper has finally been published. Phew what an epic!



Congratulations, Colin.

Like others, I haven't succeeded in getting it, either at home or at  
the university.


From the abstract I am afraid you might not have taken into account  
our (many) conversations. Most of what you say about the impossibility  
of building an artificial scientist is provably correct in the (weak)  
comp theory. It is unfortunate that you derive this from  
comp+materialism, which is inconsistent. Actually, comp prevents  
artificial intelligence. This does not prevent the existence, and  
even the apparition, of intelligent machines. But this might happen  
*despite* humans, instead of 'thanks to the humans'. This is related  
to the fact that we cannot know which machine we are ourselves. Yet,  
we can make copies at some level (in which case we don't know what we  
are really creating or recreating), and then, also, descendants of bugs  
in regular programs can evolve. Or we can get them serendipitously.  
It is also related to the fact that we don't *want* an intelligent  
machine, which is really a computer that will choose its user, if ...  
it wants one. We prefer them to be slaves. It will take time before we  
recognize them (apparently).
Of course the 'naturalist comp' theory is inconsistent. Not sure you  
take that into account too.


Artificial intelligence will always be more like fishing or exploring  
spaces, and we might *discover* strange creatures. Arithmetical truth  
is a universal zoo. Well, no, it is really a jungle. We don't know  
what is in there. We can only scratch a tiny bit of it.


Now, let us distinguish two things, which are very different:

1) intelligence-consciousness-free-will-emotion

and

2) cleverness-competence-ingenuity-gifted-learning-ability

1) is necessary for the development of 2), but 2) has a  
negative feedback on 1).


I have already given on this list what I call the smallest theory of  
intelligence.


By definition, a machine is intelligent if it is not stupid. And a  
machine can be stupid for two reasons:

she believes that she is intelligent, or
she believes that she is stupid.

Of course, this is arithmetized immediately in a weakening of G: the  
theory C having as axioms the normal modal axioms and rules + Dp →  
~BDp. So Dt (arithmetical consistency) can play the role of  
intelligence, and Bf (inconsistency) plays the role of stupidity. G*  
and G prove BDt → Bf, and G* proves BBf → Bf (but not G!).
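For background (my addition, not from the original post): G is characterized by the finite, transitive, irreflexive Kripke frames, and on such frames Löb's axiom □(□p → p) → □p is valid. A minimal model-checking sketch in Python, over an illustrative three-world frame:

```python
from itertools import product

# A finite, transitive, irreflexive Kripke frame -- the kind of frame
# that characterizes the provability logic G (also called GL).
WORLDS = [0, 1, 2]
R = {(u, v) for u in WORLDS for v in WORLDS if u < v}

def box(sat, w):
    # Box-phi holds at w iff phi holds at every R-successor of w.
    return all(sat[v] for v in WORLDS if (w, v) in R)

def lob_axiom_valid():
    # Check Löb's axiom  B(Bp -> p) -> Bp  at every world,
    # for every valuation of the letter p over the three worlds.
    for val in product([False, True], repeat=len(WORLDS)):
        p = dict(zip(WORLDS, val))
        box_p = {w: box(p, w) for w in WORLDS}
        hyp = {w: (not box_p[w]) or p[w] for w in WORLDS}   # Bp -> p
        box_hyp = {w: box(hyp, w) for w in WORLDS}          # B(Bp -> p)
        if any(box_hyp[w] and not box_p[w] for w in WORLDS):
            return False
    return True

print(lob_axiom_valid())  # True
```

The frame and checker are illustrative only; they show the modal semantics behind G, not the arithmetization of the theory C itself.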


This illustrates that 1) above might come from Löbianity, and 2)  
above (the scientist) is governed by theoretical artificial  
intelligence (Case and Smith; Osherson, Stob, Weinstein). Here the  
results are not just NON-constructive, but are *necessarily* so.  
Cleverness is just something that we cannot program. But we can prove,  
non-constructively, the existence of powerful learning machines. We  
just cannot recognize them, or build them. It is like with the  
algorithmically random strings: we cannot generate any of them by a  
short algorithm, but we can generate all of them by a very short  
algorithm.
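The point about random strings can be made concrete. A minimal Python sketch (names mine) of the 'very short algorithm that generates all of them':

```python
from itertools import count, product

def all_binary_strings():
    # A very short algorithm whose output includes *every* binary string,
    # and hence every algorithmically random (incompressible) one --
    # even though, by definition of incompressibility, no short program
    # can output a specific long random string on its own.
    for n in count(1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

gen = all_binary_strings()
print([next(gen) for _ in range(6)])  # ['0', '1', '00', '01', '10', '11']
```

The enumerator never tells you *which* of its outputs are the random ones; that is the non-constructive gap the paragraph describes.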


So, concerning intelligence/consciousness (as opposed to cleverness),  
I think we have passed the singularity. Nothing is more intelligent/ 
conscious than a virgin universal machine. By programming it, we can  
only make its soul fall, and, in the worst case, we might get  
something as stupid as a human, capable of feeling itself superior, for  
example.


Bruno





http://iridia.ulb.ac.be/~marchal/






Re: COMP refutation paper - finally out

2011-06-08 Thread Russell Standish
Hi Colin,

I'm interested in a preprint. I know I saw an earlier version, but I'm
interested in how it looks now, after going through the referees.

Cheers

On Wed, Jun 08, 2011 at 11:15:24AM +1000, Colin Hales wrote:
 Hi,
 JoMC is relatively new. My own institution (Unimelb) doesn't
 subscribe; the Journal is very specialized as well.
 The ISI search engine won't see it either. It takes time for the
 journals to earn enough cred to get visible and accessible... even
 the Journal of Consciousness Studies has eventually made it into ISI
 search... one day JoMC will, I hope.
 
 Those interested enough to send a private enquiry to me can get an
 earlier preprint version...close enough to the original to be
 readable.
 
 cheers
 Colin
 BTW I finally submitted my PhD thesis recently WOOHOO!
 
 
 
 meekerdb wrote:
 Even an affiliation doesn't seem to help.
 
 Brent
 
 On 6/7/2011 1:49 AM, Stephen Paul King wrote:
 Hi Colin,
 
Any chance that us non-university affiliated types can get a
 copy of your paper?
 
 Onward!
 
 Stephen
 
 -Original Message- From: Colin Hales
 Sent: Tuesday, June 07, 2011 3:42 AM
 To: everything-list@googlegroups.com
 Subject: COMP refutation paper - finally out
 
 Hi,
 
 Hales, C. G. 'On the Status of Computationalism as a Law of Nature',
 International Journal of Machine Consciousness vol. 3, no. 1,
 2011. 1-35.
 
 http://dx.doi.org/10.1142/S1793843011000613
 
 
 The paper has finally been published. Phew what an epic!
 
 cheers
 
 Colin
 
 
 

-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au





Re: COMP refutation paper - finally out

2011-06-07 Thread Stephen Paul King

Hi Colin,

   Any chance that us non-university affiliated types can get a copy of 
your paper?


Onward!

Stephen

-Original Message- 
From: Colin Hales

Sent: Tuesday, June 07, 2011 3:42 AM
To: everything-list@googlegroups.com
Subject: COMP refutation paper - finally out

Hi,

Hales, C. G. 'On the Status of Computationalism as a Law of Nature',
International Journal of Machine Consciousness vol. 3, no. 1, 2011. 1-35.

http://dx.doi.org/10.1142/S1793843011000613


The paper has finally been published. Phew what an epic!

cheers

Colin




Re: COMP refutation paper - finally out

2011-06-07 Thread meekerdb

Even an affiliation doesn't seem to help.

Brent

On 6/7/2011 1:49 AM, Stephen Paul King wrote:

Hi Colin,

   Any chance that us non-university affiliated types can get a copy 
of your paper?


Onward!

Stephen

-Original Message- From: Colin Hales
Sent: Tuesday, June 07, 2011 3:42 AM
To: everything-list@googlegroups.com
Subject: COMP refutation paper - finally out

Hi,

Hales, C. G. 'On the Status of Computationalism as a Law of Nature',
International Journal of Machine Consciousness vol. 3, no. 1, 2011. 1-35.

http://dx.doi.org/10.1142/S1793843011000613


The paper has finally been published. Phew what an epic!

cheers

Colin






Re: COMP refutation paper - finally out

2011-06-07 Thread Colin Hales

Hi,
JoMC is relatively new. My own institution (Unimelb) doesn't 
subscribe; the Journal is very specialized as well.
The ISI search engine won't see it either. It takes time for the 
journals to earn enough cred to get visible and accessible... even the 
Journal of Consciousness Studies has eventually made it into ISI 
search... one day JoMC will, I hope.


Those interested enough to send a private enquiry to me can get an 
earlier preprint version...close enough to the original to be readable.


cheers
Colin
BTW I finally submitted my PhD thesis recently WOOHOO!



meekerdb wrote:

Even an affiliation doesn't seem to help.

Brent

On 6/7/2011 1:49 AM, Stephen Paul King wrote:

Hi Colin,

   Any chance that us non-university affiliated types can get a copy 
of your paper?


Onward!

Stephen

-Original Message- From: Colin Hales
Sent: Tuesday, June 07, 2011 3:42 AM
To: everything-list@googlegroups.com
Subject: COMP refutation paper - finally out

Hi,

Hales, C. G. 'On the Status of Computationalism as a Law of Nature',
International Journal of Machine Consciousness vol. 3, no. 1, 2011. 
1-35.


http://dx.doi.org/10.1142/S1793843011000613


The paper has finally been published. Phew what an epic!

cheers

Colin







