On Sunday, February 18, 2018 at 12:03:28 PM UTC-7, Brent wrote:
>
> On 2/18/2018 5:05 AM, agrays...@gmail.com wrote:
>
> On Sunday, February 18, 2018 at 12:34:47 AM UTC-7, Brent wrote: 
>>
>> On 2/17/2018 10:28 PM, agrays...@gmail.com wrote:
>>
>> On Saturday, February 17, 2018 at 10:50:13 PM UTC-7, Brent wrote: 
>>>
>>> On 2/17/2018 5:44 PM, agrays...@gmail.com wrote:
>>>
>>> On Saturday, February 17, 2018 at 6:19:28 PM UTC-7, Brent wrote: 
>>>>
>>>> On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:
>>>>
>>>> But what is the criterion when AI exceeds human intelligence? AG
>>>>
>>>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>>>
>>>> Intelligence is multi-dimensional.  Computers already do arithmetic and 
>>>> algebra and calculus better than I do.  They play chess and Go better 
>>>> (although so far I beat the Chinese checkers program online :-) ).  They 
>>>> translate more languages, and faster, than I can.  They take dictation 
>>>> better.  They can write music better than me (since I'm not even competent).
>>>>
>>>> So we need to sharpen the question.  Exactly *what* is 30yrs away?
>>>>
>>>> Brent
>>>>
>>>
>>> Exactly! Remember "Blade Runner"? IMO, AI will progressively MIMIC human 
>>> behavior and vastly exceed it in various functions. But what is 
>>> "intelligence"? AFAICT, undefined. AG
>>>
>>> When I took a series of courses in AI at UCLA in the '80s the professor 
>>> explained that artificial intelligence is whatever computers can't do yet.
>>>
>>> Brent
>>>
>>
>> Do you think there is anything about "consciousness" that distinguishes 
>> it from what a computer can eventually mimic? AG
>>
>> I think a robot, i.e. a computer that can act in the world, can be 
>> conscious, and that to have human-level general intelligence it must be 
>> conscious, although perhaps in a somewhat different way than humans are.
>>
>> Brent
>>
>
> Not being made of flesh and blood, a robot can't feel pain. 
>
> Why would you suppose that?
>
> Thus, its behavior would be determined by pure logic; merciless. That's the danger. AG 
>
> Logic doesn't have any values, so pure logic is not motivated to do 
> anything.
>

Without values, it can't be compassionate. It's like a human who enjoys a 
juicy hamburger but has no thought of the pain of the cow that died to 
provide it. AG

>
> Brent
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
