On 2/18/2018 7:48 PM, agrayson2...@gmail.com wrote:
On Sunday, February 18, 2018 at 8:37:44 PM UTC-7, Brent wrote:
On 2/18/2018 12:19 PM, agrays...@gmail.com <javascript:> wrote:
On Sunday, February 18, 2018 at 12:03:28 PM UTC-7, Brent wrote:
On 2/18/2018 5:05 AM, agrays...@gmail.com wrote:
On Sunday, February 18, 2018 at 12:34:47 AM UTC-7, Brent wrote:
On 2/17/2018 10:28 PM, agrays...@gmail.com wrote:
On Saturday, February 17, 2018 at 10:50:13 PM UTC-7,
Brent wrote:
On 2/17/2018 5:44 PM, agrays...@gmail.com wrote:
On Saturday, February 17, 2018 at 6:19:28 PM
UTC-7, Brent wrote:
On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:
But what is the criterion when AI exceeds
human intelligence? AG
https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
Intelligence is multi-dimensional. Computers
already do arithmetic and algebra and calculus
better than me. They play chess and go better
(although so far I beat the Chinese checkers
online :-) ). They translate more languages,
and faster than I can. They can take
dictation better. They can write music better
than me (since I'm not even competent).
So we need to sharpen the question. Exactly
*what* is 30yrs away?
Brent
Exactly! Remember "Blade Runner"? IMO, AI will
progressively MIMIC human behavior and vastly
exceed it in various functions. But what is
"intelligence"? AFAICT, undefined. AG
When I took a series of courses in AI at UCLA in
the '80s the professor explained that artificial
intelligence is whatever computers can't do yet.
Brent
Do you think there is anything about "consciousness"
that distinguishes it from what a computer can
eventually mimic? AG
I think a robot, i.e. a computer that can act in the
world, can be conscious; and to have human-level general
intelligence it must be conscious, although perhaps in a
somewhat different way than humans are.
Brent
Not being made of flesh and blood, a robot can't feel pain.
Why would you suppose that?
Thus, its behavior would be determined by pure logic; merciless.
That's the danger. AG
Logic doesn't have any values; so pure logic is not motivated
to do anything.
*Without values, it can't be compassionate.*
Neither can it be passionate, or even interested, or even
motivated to do anything. Yet our Mars Rovers already do things.
You seem to be the poster boy for "Failure of Imagination".
Brent
*My former colleague at JPL sends commands to the Mars Rovers. They do
what they're told to do; nothing more, or less. AG*
If the Rover is told to go to certain coordinates, but not what path to
take to avoid obstacles, then it must use intelligence. I know that JPL
doesn't steer the Rover like an automobile. The time delay is too
great.
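That kind of on-board autonomy can be illustrated with a toy example. The sketch below is just an A* grid search that picks its own route around obstacles given only a goal coordinate; it is a minimal illustration of the idea, not JPL's actual rover navigation software, and the grid and coordinates are made up for the example.

```python
import heapq

def plan_path(grid, start, goal):
    """A* search on a 4-connected grid; grid[r][c] == 1 marks an obstacle.

    Returns a list of (row, col) cells from start to goal, or None if the
    goal is unreachable. Toy planner for illustration only.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]  # (f = g + h, g, cell, path so far)
    best_g = {start: 0}
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                if g + 1 < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = g + 1
                    heapq.heappush(frontier,
                                   (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]))
    return None  # no route exists

# A small map with a wall the planner must route around.
grid = [
    [0, 0, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
path = plan_path(grid, (0, 0), (3, 0))
```

Given only start and goal, the search threads its way around both walls on its own, which is the minimal sense in which "go to these coordinates" requires some intelligence at the receiving end.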
Brent
--
You received this message because you are subscribed to the Google
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to everything-list+unsubscr...@googlegroups.com
<mailto:everything-list+unsubscr...@googlegroups.com>.
To post to this group, send email to everything-list@googlegroups.com
<mailto:everything-list@googlegroups.com>.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.