Thanks for the reply, Bruno. Comments below...

On Tue, Jun 14, 2011 at 9:53 AM, Bruno Marchal <marc...@ulb.ac.be> wrote:
>> doesn't that imply the possibility
>> of an artificial intelligence?
>
> In a weak sense of Artificial Intelligence, yes. In a strong sense, no.
>
> If you are duplicated at the right substitution level, few would say that
> "you" have become an "artificial intelligence". It would be a case of the
> good old natural intelligence, but with new clothes.

Sure, but the distinction between artificial and natural intelligence
is not that important assuming comp. The point is simply that if I can
be simulated (which I agree requires some faith), then intelligence
does not require biology (or any other particular "physical"
substrate), and strong artificial intelligence is possible in
principle, setting aside for the moment the question of whether we can
provably construct it.

> In fact, if we are machines, we cannot know which machine we are, and that
> is why you need some luck when saying "yes" to a doctor who will build a
> copy of you/your-body, at some level of description of your body.
>
> This is an old result. Already in 1922, Emil Post, who discovered the
> "Church thesis" ten years before Church, Turing, and others, realized
> that the "Gödelian argument" against Mechanism (which Post discovered
> and refuted 30 years before Lucas, and 60 years before Penrose), when
> corrected, shows only that a machine cannot build, *in a provable way*,
> a machine with qualification equivalent to its own (for example, with
> equivalent provability power in arithmetic). I have referred to this,
> on this list, as the "Benacerraf principle", after Benacerraf, who
> rediscovered it later.
>
> We just cannot do artificial intelligence in a provable manner. We need
> chance, or luck. Even if we get some intelligent machine, we will not
> know it for sure (perhaps just believe it, correctly).

Doesn't this objection only apply to attempts to construct an AI with
human-equivalent intelligence? As a counterexample, I'm thinking here
of Ben Goertzel's OpenCog, an attempt at artificial general
intelligence (AGI) whose design is informed by a theory of
intelligence that does not attempt to mirror or model human
intelligence. In light of the "Benacerraf principle", isn't it
possible in principle to provably construct AIs, so long as we're not
trying to emulate or model human intelligence?

Terren

>
> Bruno
>
>
>
>> On Thu, Jun 9, 2011 at 4:53 PM, Bruno Marchal <marc...@ulb.ac.be> wrote:
>>>
>>> Hi Colin,
>>>
>>> On 07 Jun 2011, at 09:42, Colin Hales wrote:
>>>
>>>> Hi,
>>>>
>>>> Hales, C. G., 'On the Status of Computationalism as a Law of Nature',
>>>> International Journal of Machine Consciousness, vol. 3, no. 1, 2011,
>>>> pp. 1-35.
>>>>
>>>> http://dx.doi.org/10.1142/S1793843011000613
>>>>
>>>>
>>>> The paper has finally been published. Phew, what an epic!
>>>
>>>
>>> Congratulations, Colin.
>>>
>>> Like others, I haven't succeeded in getting it, either at home or at
>>> the university.
>>>
>>> From the abstract I am afraid you might not have taken into account
>>> our (many) conversations. Most of what you say about the impossibility
>>> of building an artificial scientist is provably correct in the (weak)
>>> comp theory. It is unfortunate that you derive this from
>>> comp+materialism, which is inconsistent. Actually, comp prevents
>>> "artificial intelligence". This does not prevent the existence, and
>>> even the emergence, of intelligent machines. But this might happen
>>> *despite* humans, instead of 'thanks to the humans'. This is related
>>> to the fact that we cannot know which machine we are ourselves. Yet,
>>> we can make copies at some level (in which case we don't know what we
>>> are really creating or recreating), and then, also, descendants of
>>> bugs in regular programs can evolve. Or we can get them
>>> serendipitously. It is also related to the fact that we don't *want*
>>> intelligent machines: an intelligent machine is really a computer who
>>> will choose its user, if ... it wants one. We prefer them to be
>>> slaves. It will take time before we recognize them (apparently).
>>> Of course the 'naturalist comp' theory is inconsistent. I am not sure
>>> you take that into account either.
>>>
>>> Artificial intelligence will always be more like fishing or exploring
>>> spaces, and we might *discover* strange creatures. Arithmetical truth
>>> is a universal zoo. Well, no, it is really a jungle. We don't know
>>> what is in there. We can only scratch a tiny bit of it.
>>>
>>> Now, let us distinguish two things, which are very different:
>>>
>>> 1) intelligence-consciousness-free-will-emotion
>>>
>>> and
>>>
>>> 2) cleverness-competence-ingenuity-giftedness-learning-ability
>>>
>>> "1)" is necessary for the development of "2)", but "2)" has a negative
>>> feedback on "1)".
>>>
>>> I have already given on this list what I call the smallest theory of
>>> intelligence.
>>>
>>> By definition a machine is intelligent if it is not stupid. And a
>>> machine can be stupid for two reasons:
>>> she believes that she is intelligent, or
>>> she believes that she is stupid.
>>>
>>> Of course, this is arithmetized immediately in a weakening of G: the
>>> theory C having as axioms the modal normal axioms and rules +
>>> Dp -> ~BDp. So Dt (arithmetical consistency) can play the role of
>>> intelligence, and Bf (inconsistency) plays the role of stupidity. G*
>>> and G prove BDt -> Bf, and G* proves BBf -> Bf (but G does not!).
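>>>
>>> (A quick sketch of the first claim, reading D as ~B~ so that Dt = ~Bf:
>>>
>>>    Dt  -> ~BDt     axiom Dp -> ~BDp, with p = t
>>>    BDt -> ~Dt      contraposition
>>>    ~Dt <-> Bf      since Dt abbreviates ~Bf
>>>    BDt -> Bf       chaining the two lines above
>>>
>>> So a machine that asserts its own consistency is inconsistent: a modal
>>> form of Gödel's second incompleteness theorem. BBf -> Bf, in turn, is
>>> an instance of the reflection schema Bp -> p, which the true logic G*
>>> proves but G does not.)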
>>>
>>> This illustrates that "1)" above might come from Löbianity, and "2)"
>>> above (the scientist) is governed by theoretical artificial
>>> intelligence (Case and Smith; Osherson, Stob, and Weinstein). Here the
>>> results are not just NON-constructive, but are *necessarily* so.
>>> Cleverness is just something that we cannot program. But we can prove,
>>> non-constructively, the existence of powerful learning machines. We
>>> just cannot recognize them, or build them.
>>> It is like the algorithmically random strings: we cannot generate any
>>> particular one of them by a short algorithm, but we can generate all
>>> of them by a very short algorithm.
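>>>
>>> For instance, a minimal Python sketch of that last point: the very
>>> short program below eventually prints *every* binary string, and hence
>>> every algorithmically random string among them, even though no
>>> comparably short program can print any one specific long random
>>> string.
>>>
>>>   from itertools import count, product
>>>
>>>   # Enumerate all binary strings in order of length: a very short
>>>   # algorithm whose output includes every incompressible string.
>>>   for n in count(0):
>>>       for bits in product('01', repeat=n):
>>>           print(''.join(bits))
>>>
>>>   # By contrast, printing one *specific* algorithmically random n-bit
>>>   # string requires a program of roughly n bits: that
>>>   # incompressibility is what "random" means here.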
>>>
>>> So, concerning intelligence/consciousness (as opposed to cleverness),
>>> I think we have passed the "singularity". Nothing is more
>>> intelligent/conscious than a virgin universal machine. By programming
>>> it, we can only make its "soul" fall, and, in the worst case, we might
>>> get something as stupid as a human, capable of feeling itself
>>> superior, for example.
>>>
>>> Bruno
>>>
>>>
>>>
>>>
>>>
>>> http://iridia.ulb.ac.be/~marchal/
>>>
>>>
>>>
>>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.
