Bruno,

> I think that comp might imply that simple virgin (non-programmed) universal
> (and immaterial) machines are already conscious. Perhaps even maximally
> conscious.

This sounds like a comp variant of panpsychism (platopsychism?)... in
which consciousness is axiomatically proposed as a property of
arithmetic.  Are you saying that comp would require such an axiom?  If
so, why?

On Wed, Jun 15, 2011 at 9:56 AM, Bruno Marchal <marc...@ulb.ac.be> wrote:
> Then adding induction gives them Löbianity, and this makes them
> self-conscious (which might already be a delusion of some sort).

I'm not sure how an unprogrammed, immaterial universal machine could
be self-conscious, since self-consciousness requires the rudimentary
distinction of self versus other. What is the 'other' against which
this virgin universal machine would be distinguishing itself?

> Unfortunately the hard task is to interface such (self)-consciousness with
> our probable realities (computational histories). This is what we can hardly
> be sure about.

Perhaps I'm just confused about your ideas - wouldn't be the first
time! - but this seems to suffer from the same problem as panpsychism:
although asserting consciousness as a property of the universe
sidesteps Cartesian dualism, we are still left without an explanation
of why human consciousness differs from ant consciousness, which in
turn differs from rock consciousness.  In your case, we are left
wondering how the consciousness of the virgin universal machine
"interfaces" with specific universal numbers, and what would explain
the differences in consciousness among them.

That's why I favor the idea that consciousness arises from certain
kinds of cybernetic (autopoietic) organization (which is consistent
with comp). In fact I think it is still consistent with much of what
you're saying... but it is your assertion that comp denies strong AI
that implies you would find fault with that idea.

> I still don't know if the brain is just a filter of consciousness, in which
> case losing neurons might enhance consciousness (and some data in
> neurophysiology might confirm this). I think Goertzel is more creating a
> competent machine than an intelligent one, from what I have read about it. I
> oppose intelligence/consciousness and competence/ingenuity. The first is
> needed to develop the latter, but the latter has a negative feedback on the
> first.

I think I understand your point here with regard to consciousness,
given that you're saying it's a property of the platonic 'virgin'
universal machine. But if you assert that about intelligence, aren't
you saying that intelligence isn't computable (i.e. that comp denies
strong AI)?  This would seem to contradict Marcus Hutter's AIXI.  Are
you saying that our intelligence as humans is dependent (in the same
way as consciousness) on the fact that we don't know which machine we
are?  That creativity is sourced in subjective indeterminacy?

Terren

> Bruno
>
> On Thu, Jun 9, 2011 at 4:53 PM, Bruno Marchal <marc...@ulb.ac.be> wrote:
>
> Hi Colin,
>
> On 07 Jun 2011, at 09:42, Colin Hales wrote:
>
> Hi,
>
> Hales, C. G. 'On the Status of Computationalism as a Law of Nature',
> International Journal of Machine Consciousness vol. 3, no. 1, 2011. 1-35.
> http://dx.doi.org/10.1142/S1793843011000613
>
> The paper has finally been published. Phew what an epic!
>
> Congratulations, Colin.
>
> Like others, I haven't succeeded in getting it, either at home or at the
> university.
>
> From the abstract I am afraid you might not have taken into account our
> (many) conversations. Most of what you say about the impossibility of
> building an artificial scientist is provably correct in the (weak) comp
> theory. It is unfortunate that you derive this from comp+materialism,
> which is inconsistent. Actually, comp prevents "artificial intelligence".
> This does not prevent the existence, and even the apparition, of
> intelligent machines. But this might happen *despite* humans, instead of
> 'thanks to the humans'. This is related to the fact that we cannot know
> which machine we are ourselves. Yet we can make copies at some level (in
> which case we don't know what we are really creating or recreating), and
> then, also, descendants of bugs in regular programs can evolve. Or we can
> get them serendipitously.
>
> It is also related to the fact that we don't *want* an intelligent
> machine, which is really a computer that will choose its user, if ... it
> wants one. We prefer them to be slaves. It will take time before we
> recognize them (apparently).
>
> Of course the 'naturalist comp' theory is inconsistent. Not sure you take
> that into account too.
>
> Artificial intelligence will always be more like fishing or exploring
> spaces, and we might *discover* strange creatures. Arithmetical truth is
> a universal zoo. Well, no, it is really a jungle. We don't know what is
> in there. We can only scratch a tiny bit of it.
>
> Now, let us distinguish two things, which are very different:
>
> 1) intelligence-consciousness-free-will-emotion
>
> and
>
> 2) cleverness-competence-ingenuity-gifted-learning-ability
>
> "1)" is necessary for the development of "2)", but "2)" has a negative
> feedback on "1)".
>
> I have already given on this list what I call the smallest theory of
> intelligence. By definition a machine is intelligent if it is not stupid,
> and a machine can be stupid for two reasons:
>
> she believes that she is intelligent, or
> she believes that she is stupid.
>
> Of course, this is arithmetized immediately in a weakening of G: the
> theory C having as axioms the normal modal axioms and rules + Dp -> ~BDp.
> So Dt (arithmetical consistency) can play the role of intelligence, and
> Bf (inconsistency) plays the role of stupidity. G* and G prove BDt -> Bf,
> and G* proves BBf -> Bf (but not G!).
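
Just to check my own reading of that last step (and please correct me if I'm
mangling your notation): Dt is ~Bf, so BDt is B~Bf; since ~Bf -> (Bf -> f) is
a propositional tautology, necessitation and K give B~Bf -> B(Bf -> f), and
Lob's theorem (B(Bf -> f) -> Bf) closes the chain, giving B~Bf -> Bf, i.e.
BDt -> Bf. If I have that right, it is Godel's second incompleteness theorem
in modal dress: a machine that believes its own consistency is inconsistent.
As a sanity check on my understanding (a sketch only, using the standard
Kripke semantics for G on finite transitive, irreflexive frames, where Bf at
a world just means "no successors"), the few lines of Python below verify the
formula on all such frames with up to four worlds:

# Sanity check (evidence, not a proof): BDt -> Bf holds at every world of
# every transitive, irreflexive Kripke frame with at most 4 worlds, i.e.
# the finite frames appropriate to G.  Since f holds nowhere, "Bf" at a
# world w just means "w has no successors", and "Dt" (= ~Bf) at a world
# means "at least one successor".
from itertools import product

def transitive_irreflexive_frames(n):
    """Yield every transitive, irreflexive relation R on worlds 0..n-1."""
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    for bits in product([False, True], repeat=len(pairs)):
        R = {p for p, keep in zip(pairs, bits) if keep}
        if all((i, k) in R for (i, j) in R for (j2, k) in R if j2 == j):
            yield R

def succ(R, w):
    return [v for (u, v) in R if u == w]

for n in range(1, 5):
    for R in transitive_irreflexive_frames(n):
        for w in range(n):
            B_Dt = all(succ(R, v) for v in succ(R, w))  # every successor has a successor
            B_f = not succ(R, w)                        # w itself has no successors
            assert (not B_Dt) or B_f, (n, sorted(R), w)

print("BDt -> Bf holds on all transitive, irreflexive frames with <= 4 worlds")

It runs in well under a second and of course only inspects tiny frames, so it
is a check on my reading rather than anything you need.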

> This illustrates that "1)" above might come from Löbianity, and "2)" above
> (the scientist) is governed by theoretical artificial intelligence (Case
> and Smith; Osherson, Stob, and Weinstein). Here the results are not just
> NON-constructive, but are *necessarily* so. Cleverness is just something
> that we cannot program. But we can prove, non-constructively, the
> existence of powerful learning machines. We just cannot recognize them,
> or build them. It is like the algorithmically random strings: we cannot
> generate any particular one of them by a short algorithm, but we can
> generate all of them by a very short algorithm.
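
That last remark is a nice way to put it. To make it concrete for anyone
following along (my toy illustration, not yours): the enumerator below is
only a few lines long, yet it eventually outputs every binary string, and so
in particular every algorithmically random one; what no comparably short
program can do is output one *specific* long random string and halt.

# A very short program that enumerates *all* binary strings in length order.
# It therefore eventually prints every algorithmically random string, even
# though no short program can print any one particular long random string.
from itertools import count, product

def all_binary_strings():
    yield ""                        # the empty string first
    for n in count(1):              # then every string of length 1, 2, 3, ...
        for bits in product("01", repeat=n):
            yield "".join(bits)

if __name__ == "__main__":
    gen = all_binary_strings()
    for _ in range(10):             # print the first few, just to see it run
        print(next(gen))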

> So, concerning intelligence/consciousness (as opposed to cleverness), I
> think we have passed the "singularity". Nothing is more
> intelligent/conscious than a virgin universal machine. By programming it,
> we can only make its "soul" fall, and, in the worst case, we might get
> something as stupid as a human, capable of feeling itself superior, for
> example.
>
> Bruno
>
> http://iridia.ulb.ac.be/~marchal/
