On 5/7/06, sanjay padmane <[EMAIL PROTECTED]> wrote:

On 5/7/06, Pei Wang <[EMAIL PROTECTED]> wrote:
>
> AI
> doesn't necessarily follow the same path as how human intelligence is
> produced, even though it is indeed the only path that has been proved
> to work so far.
>


IMO, if a machine achieves true intelligence via a system that is entirely
different from the brain, it must, at some point, mirror humanlike
intelligence in one of its modules in order to communicate effectively with
humans.

Agree. There are many practical reasons to do so. However, since an AI
won't have exactly human experience, it will not behave exactly as a
human. More importantly, behaving like a human is not a necessary
condition for a system to be considered intelligent.

Such a situation may arise if the AI grows out of an ideal seed and transforms
into something totally different from humans. But because humans are
responsible for its fabrication at this time, and are using various
shortcuts taken from knowledge of the self, it is more likely that the AI will
at first be mostly humanlike in a crude way and will later transform into a
different (possibly better) kind, while keeping the human 'parts'.

The AI's knowledge base will surely overlap with ours, though it
won't be identical, even at the very beginning.

Pei

-Sanjay

