Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread John Clark
On Sun, Feb 18, 2018 at 9:26 PM, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

> Computers such as AlphaGo have complex algorithms for taking the rules
> of a game like chess and running through long Markov chains of game events
> to increase their database for playing the game.

That won't work. No computer could examine every possible position in the
game of Go, because there are about 2.08*10^170 legal positions and only
10^80 atoms in the observable universe; and yet in just 24 hours it taught
itself to be not just a little better but vastly better than any human
being at the game. It easily beat the computer that beat the world's best
human Go player.
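For scale, the arithmetic can be checked with logarithms (a trivial sketch; 2.08*10^170 is Tromp's count of legal Go positions, and 10^80 is the usual estimate for atoms in the observable universe):

```python
import math

# Compare the two magnitudes in log10 space, since neither fits in a float.
log10_positions = math.log10(2.08) + 170   # legal Go positions, ~2.08e170
log10_atoms = 80                           # atoms in the observable universe
surplus = log10_positions - log10_atoms
print(round(surplus))  # legal positions outnumber atoms by roughly 10^90
```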

And it wasn't a specialized program; it did the same thing with chess and
several other games. The most amazing thing of all is that humans didn't
teach it to do any of this, it taught itself; all it started out knowing
was which moves were legal and which were not. That's it.
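The tabula-rasa self-play described above can be illustrated in miniature. What follows is a toy Monte Carlo self-play learner for tic-tac-toe, far simpler than AlphaZero's actual method (tree search plus a deep network), but showing the same starting point: the only rule knowledge the program is given is which moves are legal, and every value it learns comes from its own games. All names and constants here are illustrative assumptions, not anything from DeepMind's code.

```python
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that mark has three in a row, else None."""
    for i, j, k in LINES:
        if board[i] and board[i] == board[j] == board[k]:
            return board[i]
    return None

def legal_moves(board):
    """The only rule knowledge the learner is given: which moves are legal."""
    return [i for i, v in enumerate(board) if v == '']

V = {}  # learned value of (after-state, mover) pairs, built from self-play alone

def pick_move(board, mark, eps):
    """Greedy in the learned value table, with eps-random exploration."""
    moves = legal_moves(board)
    if random.random() < eps:
        return random.choice(moves)
    def value(m):
        nb = list(board)
        nb[m] = mark
        return V.get((tuple(nb), mark), 0.0)
    return max(moves, key=value)

def self_play_train(episodes, alpha=0.5, eps=0.2):
    """Monte Carlo value learning: play against yourself, then push the
    value of every visited after-state toward the final outcome."""
    for _ in range(episodes):
        board, history = [''] * 9, []
        for turn in range(9):
            mark = 'XO'[turn % 2]
            board[pick_move(board, mark, eps)] = mark
            history.append((tuple(board), mark))
            if winner(board):
                break
        w = winner(board)
        for state, mark in history:
            reward = 0.0 if w is None else (1.0 if w == mark else -1.0)
            old = V.get((state, mark), 0.0)
            V[(state, mark)] = old + alpha * (reward - old)

random.seed(0)
self_play_train(3000)
print(len(V) > 0)  # the table of self-taught values is non-empty: True
```

The design choice worth noting is that nothing human-authored evaluates positions; the value table is filled entirely by outcomes of the program's own games, which is the (vastly scaled-down) sense in which AlphaZero "taught itself".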

And besides, explaining why something is smart does not make it one bit
less smart.

> There is not really anything about "knowing something" going on here.

Call me crazy, but I think words should have meaning. If you're right and the
computer does not "know something", then whatever "knowing something" means
(assuming it means anything at all) it has no virtue, because humans, who
"know something", behave more stupidly than a computer that "knows nothing".


> There is a lot of hype over AI these days, but I suspect a lot of this is
> meant to beguile people.

You seem to believe that humans and the meat they are made of have some
special mystical something that computers and the microchips they are made
of can never have. I disagree; I think the idea of a soul is superstitious
nonsense.


John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread agrayson2000


On Sunday, February 18, 2018 at 9:06:11 PM UTC-7, Brent wrote:
>
>
>
> On 2/18/2018 7:41 PM, agrays...@gmail.com  wrote:
>
>
>
> On Sunday, February 18, 2018 at 8:35:59 PM UTC-7, Brent wrote: 
>>
>>
>>
>> On 2/18/2018 12:15 PM, agrays...@gmail.com wrote:
>>
>>
>>
>> On Sunday, February 18, 2018 at 12:09:37 PM UTC-7, Brent wrote: 
>>>
>>>
>>>
>>> On 2/18/2018 6:11 AM, Lawrence Crowell wrote:
>>>
>>> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish 
>>> wrote: 
>>>>
>>>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>>>> > 
>>>> > 
>>>> > On 2/17/2018 4:58 PM, agrays...@gmail.com wrote: 
>>>> > > But what is the criterion when AI exceeds human intelligence? AG 
>>>> > > 
>>>> > > 
>>>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>>>  
>>>> > 
>>>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>>>> > 
>>>> > Brent 
>>>> > 
>>>>
>>>> According to the title (I haven't RTFA), it's the 
>>>> singularity. Starting from a point where a machine designs, 
>>>> and manufactures improved copies of itself, technology will supposedly 
>>>> veer from its exponential path (Moore's law) etc to hyperbolic. Being 
>>>> hyperbolic, it reaches infinity within a finite period of time, 
>>>> expected to be a matter of months perhaps. 
>>>>
>>>> Given that we really don't understand creative processes (not even 
>>>> good old fashioned biological evolution is really well understood), 
>>>> I'm sceptical about the 30 years prognostication. It is mostly based on 
>>>> extrapolating Moore's law, which is the easy part of technological 
>>>> change. 
>>>>
>>>> This won't be a problem for my children - my grandchildren perhaps, if 
>>>> I ever end up having any. 
>>>>
>>>> Cheers 
>>>>
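Russell's exponential-versus-hyperbolic distinction above can be checked numerically: dx/dt = k*x stays finite at every finite time, while dx/dt = k*x^2 has the exact solution x(t) = x0/(1 - k*x0*t), which blows up at the finite time t* = 1/(k*x0). A minimal sketch (the constants are arbitrary illustrative choices):

```python
import math

def exponential(x0, k, t):
    """Closed-form solution of dx/dt = k*x: finite for every finite t."""
    return x0 * math.exp(k * t)

def hyperbolic(x0, k, t):
    """Closed-form solution of dx/dt = k*x**2: diverges at t* = 1/(k*x0)."""
    return x0 / (1.0 - k * x0 * t)

x0, k = 1.0, 1.0
t_star = 1.0 / (k * x0)            # finite-time singularity of the hyperbolic path
print(exponential(x0, k, 0.999))   # still modest, about e^0.999 ~ 2.72
print(hyperbolic(x0, k, 0.999))    # already 1000, and unbounded as t -> t_star
```

This is why "hyperbolic" growth, unlike Moore's-law exponentials, implies a singularity at a definite date rather than merely very fast progress.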
>>>
>>> One thing a computer cannot do is ask a question. I can ask a question 
>>> and program a computer to help solve the problem. In fact I am writing a 
>>> program to do just this. I am working on a computer program to model aspects 
>>> of gravitational memory. What the computer will not do, at least computers 
>>> we currently employ will not do, is ask the question and then work to 
>>> solve it. A computer can find a numerical solution or render something 
>>> numerically, but it does not spontaneously act to ask the question or 
>>> propose something creative to then solve or render the solution.
>>>
>>>
>>> You must never have applied for a loan online.
>>>
>>
>> It can only do what it has been programmed to do. It can't act independently 
>> of its program, such as wondering if some theory makes sense, or coming up 
>> with tests of a theory. Or say, it can't invent chess, it can only play it 
>> better than humans. It can't "think" out of the box. AG
>>
>>
>> Yes, keep repeating that over and over.  Repetition makes a convincing 
>> argument...for some people.
>>
>> Brent
>>
>
>
> *What's your countervailing evidence? You want to think it can think, and 
> that's YOUR repetitious argument. AG *
>
>
>
> https://www.ted.com/talks/maurice_conti_the_incredible_inventions_of_intuitive_ai#t-184772
>
> Brent
>

*I viewed it. Very impressive what they can do. However, I'd be MORE 
impressed, indeed HUGELY impressed, and convinced of the existence of 
consciousness, if, without an algorithm explicitly programming it, the 
computer REFUSED to do as commanded. AG *



Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread 'Chris de Morsella' via Everything List

 
 
On Mon, Feb 19, 2018 at 3:56 AM, Lawrence Crowell wrote:

On Sunday, February 18, 2018 at 10:00:24 PM UTC-6, Brent wrote:

On 2/18/2018 6:26 PM, Lawrence Crowell wrote:
  
Computers such as AlphaGo have complex algorithms for taking the rules of a 
game like chess and running through long Markov chains of game events to 
increase their database for playing the game. There is not really anything 
about "knowing something" going on here. There is a lot of hype over AI these 
days, but I suspect a lot of this is meant to beguile people. I do suspect in 
time we will interact with AI as if it were intelligent and conscious. The 
really big changer, though, I think will be the neural-cyber interlink that 
will put brains as the primary internet nodes.

Why would you suppose that when electronics have a signal speed ten million 
times faster than neurons?  Presently neurons have an advantage in connection 
density and power dissipation, but I see no reason they can hold that advantage.

Brent


I think it may come down to computers that obey the Church-Turing thesis, which 
is finite and bounded. Hofstadter's book Godel Escher Bach has a chapter on 
Bloop, Floop, and Gloop, where Bloop means a bounded loop, or a halting program 
on a Turing machine. Biology, however, is not Bloop, but is rather a web of 
processors that are more Floop, or free loop. The busy beaver algorithm is such 
a case, which grows in complexity with each step. The computation of many 
fractals is like this as well, where the Mandelbrot set with each iteration on 
a certain scale needs refinement to another floating point precision and thus 
grows in huge complexity. These of course halt in practice because the 
programmer puts in a stop by hand. These are recursively enumerable, and their 
complement in a set theoretic sense are Godel loops, or Gloop. For machines to 
have properties at least parallel to conscious behavior we really have to be 
running in at least Floop and maybe into Gloop.
LC
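Crowell's Mandelbrot point can be made concrete in a few lines: the escape-time iteration is open-ended, and it only halts because the programmer imposes a cap by hand. A minimal sketch, where `max_iter` plays the role of the hand-inserted stop he describes:

```python
def escape_time(c: complex, max_iter: int = 100) -> int:
    """Iterate z -> z*z + c; return the step at which |z| escapes 2,
    or max_iter if it never does within the cap.  Without max_iter the
    loop would run forever for points inside the Mandelbrot set."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter  # the programmer's hand-inserted stop

print(escape_time(0j))       # c = 0 is in the set: the loop hits the cap, 100
print(escape_time(1 + 1j))   # escapes after 2 iterations
```

For points in the set the underlying computation never halts on its own, which is exactly the sense in which the procedure is only "Bloop" because a bound was imposed from outside.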
Not sure if this has been touched on in this thread, but it seems to me that 
the emergent phenomena of both self-awareness and consciousness depend on 
information hiding in some fundamental way. Both our self-awareness and our 
conscious minds, which from our incomplete perspective seem to be innate and 
ever present (at least when we are awake), are themselves the emergent outcomes 
of a vast amount of neural networked activity that is exquisitely hidden from 
us. We are unaware of the genesis of our own awareness.
Evidence from MRI scans supports the conclusion that before we are aware of 
being aware of some objectively measurable external event, or before we 
experience having a thought, the almost one hundred billion neurons crammed 
into the highly folded cortical pizza pie stuffed inside our skulls have been 
very busy and chatty indeed.
We are aware of being aware and we experience conscious existence, but the 
process by which both our conscious experience and our own awareness of being 
arise within our minds is largely hidden from us. I think it is a fair and 
reasonable question to ask: is information hiding a necessary and integral 
aspect of the processes through which self-awareness and consciousness arise?
In computer science, the rather recent emergence of deep neural networks, 
which are characterized by having many layers, of which only the input layer 
and the output layer of neurons are directly measurable while the many other 
layers arrayed in the stack between them remain hidden, offers some intriguing 
parallels that also seem to indicate a critical role for information hiding. 
The Google machine-learned neural networks for image processing, for example, 
have 10 to 30 (or by now perhaps even more) stacked layers of artificial 
neurons, most of which are hidden.
Because of the non-linearity of the processes in play within these deep stacks 
of layered artificial neurons, it is difficult to know in any definitive 
manner exactly what is going on. One can experiment on the statistically 
trained (or, in the vernacular, machine-learned) models, for example by 
tweaking training parameters to see how doing so affects the resulting 
outcomes, and by forensically analyzing any generated logs and other 
telemetry; but the outcomes are often surprisingly beautiful dreamscapes that 
are not reducible to a series of algorithmic steps applied by the many hidden 
layers to whatever input signals have been fed to the input layer of neurons.
It seems to me that the emergence of consciousness and self-awareness is 
likewise exquisitely nonlinear in nature, and that this non-linear outcome 
itself depends on information hiding in order to operate. Each successive 
layer in the stack is mostly unaware of the vast array of activities occurring 
on the layers beneath it.
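The layered information hiding described above can be sketched with a toy stack of layers: a caller sees only the input and the final output, while every intermediate activation stays internal. A minimal sketch in plain Python; the layer sizes and random weights are arbitrary illustrative assumptions, not any real trained network:

```python
import random

def forward(x, layers):
    """Push an input vector through a stack of (weights, biases) layers
    with ReLU activations.  Only x and the returned output are visible to
    a caller; the per-layer activations are internal, hidden state."""
    activations = []
    for weights, biases in layers:
        x = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
        activations.append(x)
    return x, activations

def make_layer(n_in, n_out):
    """Random, untrained layer: n_out rows of n_in weights plus zero biases."""
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

random.seed(0)
# A 4-layer stack: 3 inputs -> three hidden layers of 5 -> 2 outputs.
layers = [make_layer(3, 5), make_layer(5, 5), make_layer(5, 5), make_layer(5, 2)]
output, acts = forward([1.0, -0.5, 0.25], layers)
print(len(acts))    # activations at all 4 layers; only the last is exposed
print(len(output))  # 2
```

Even in this tiny example, nothing about `output` reveals what the three interior layers computed, which is the structural parallel to information hiding being drawn above.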

Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread Brent Meeker



On 2/19/2018 3:56 AM, Lawrence Crowell wrote:

On Sunday, February 18, 2018 at 10:00:24 PM UTC-6, Brent wrote:



On 2/18/2018 6:26 PM, Lawrence Crowell wrote:

Computers such as AlphaGo have complex algorithms for taking the
rules of a game like chess and running through long Markov chains
of game events to increase their data base for playing the game.
There is not really anything about "knowing something" going on
here. There is a lot of hype over AI these days, but I suspect a
lot of this is meant to beguile people. I do suspect in time we
will interact with AI as if it were intelligent and conscious.
The really big changer though I think will be the neural-cyber
interlink that will put brains as the primary internet nodes.


Why would you suppose that when electronics have a signal speed
ten million times faster than neurons?  Presently neurons have an
advantage in connection density and power dissipation; but I see
no reason they can hold that advantage.

Brent


I think it may come down to computers that obey the Church-Turing 
thesis, which is finite and bounded. Hofstadter's book /Godel Escher 
Bach/ has a chapter Bloop, Floop, Gloop where the Bloop means bounded 
loop or a halting program on a Turing machine. Biology however is not 
Bloop, but is rather a web of processors that are more Floop, or free 
loop. The busy beaver algorithm is such a case, which grows in 
complexity with each step. The computation of many fractals is like this as 
well, where the Mandelbrot set with each iteration on a certain scale 
needs refinement to another floating point precision and thus grows in 
huge complexity. These of course halt in practice because the 
programmer puts in a stop by hand. These are recursively enumerable, 
and their complement in a set theoretic sense are Godel loops or 
Gloop. For machines to have properties at least parallel to conscious 
behavior we really have to be running in at least Floop and maybe into 
Gloop.


But the complexity is bounded physically.  All these mathematical 
idealizations of computation assume some kind of infinity.  Since there 
are physical bounds, the Church-Turing thesis will apply and all 
realizable computers compute the same recursively enumerable 
functions.  It's just that electronic ones can do it a lot faster, or, 
looked at another way, can be a lot bigger.


Brent




LC




Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread Bruno Marchal

> On 19 Feb 2018, at 12:56, Lawrence Crowell  
> wrote:
> 
> On Sunday, February 18, 2018 at 10:00:24 PM UTC-6, Brent wrote:
> 
> 
> On 2/18/2018 6:26 PM, Lawrence Crowell wrote:
>> Computers such as AlphaGo have complex algorithms for taking the rules of a 
>> game like chess and running through long Markov chains of game events to 
>> increase their data base for playing the game. There is not really anything 
>> about "knowing something" going on here. There is a lot of hype over AI 
>> these days, but I suspect a lot of this is meant to beguile people. I do 
>> suspect in time we will interact with AI as if it were intelligent and 
>> conscious. The really big changer though I think will be the neural-cyber 
>> interlink that will put brains as the primary internet nodes.
> 
> Why would you suppose that when electronics have a signal speed ten million 
> times faster than neurons?  Presently neurons have an advantage in connection 
> density and power dissipation; but I see no reason they can hold that 
> advantage.
> 
> Brent
> 
> I think it may come down to computers that obey the Church-Turing thesis, 
> which is finite and bounded.

The machines are finite, but they are supposed to be in an unbounded space and 
time environment.




> Hofstadter's book Godel Escher Bach has a chapter Bloop, Floop, Gloop where 
> the Bloop means bounded loop or a halting program on a Turing machine.

Bounded loops prevent the machine from being universal. A halting oracle makes 
the machine more powerful than a universal machine, but it still obeys the same 
machine theology. The universal machine should be in the largest class (Gloop, 
I presume). 



> Biology however is not Bloop, but is rather a web of processors that are more 
> Floop, or free loop.

It is Gloop. Otherwise we would have been unable to talk about the universal machines.



> The busy beaver algorithm is such a case, which grows in complexity with each 
> step. The computation of many fractals is this as well, where the Mandelbrot 
> set with each iteration on a certain scale needs refinement to another 
> floating point precision and thus grows in huge complexity. These of course 
> in practice halting because the programmer puts in by hand a stop.

Assuming the programmer is not lost in a loop. No universal entity is immune 
to this.



> These are recursively enumerable, and their complement in a set theoretic 
> sense are Godel loops or Gloop.

? Universal = creative set in the sense of Post: it means recursively 
enumerable with a complement which is not (but which is transfinitely 
enumerable in some sense). The complement is not a machine at all.




> For machines to have properties at least parallel to conscious behavior we 
> really have to be running in at least Floop and maybe into Gloop.

Universality is enough, and Löbianity is enough to be self-conscious like us.

Bruno



> 
> LC
> 



Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread Bruno Marchal

> On 19 Feb 2018, at 04:41, agrayson2...@gmail.com wrote:
> 
> 
> 
> On Sunday, February 18, 2018 at 8:35:59 PM UTC-7, Brent wrote:
> 
> 
> On 2/18/2018 12:15 PM, agrays...@gmail.com  wrote:
>> 
>> 
>> On Sunday, February 18, 2018 at 12:09:37 PM UTC-7, Brent wrote:
>> 
>> 
>> On 2/18/2018 6:11 AM, Lawrence Crowell wrote:
>>> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote:
>>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>>> > 
>>> > 
>>> > On 2/17/2018 4:58 PM, agrays...@gmail.com <> wrote: 
>>> > > But what is the criterion when AI exceeds human intelligence? AG 
>>> > > 
>>> > > https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>> > >  
>>> > > <https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away>
>>> > >  
>>> > 
>>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>>> > 
>>> > Brent 
>>> > 
>>> 
>>> According to the title (I haven't RTFA), it's the 
>>> singularity. Starting from a point where a machine designs, 
>>> and manufactures improved copies of itself, technology will supposedly 
>>> veer from its exponential path (Moore's law) etc to hyperbolic. Being 
>>> hyperbolic, it reaches infinity within a finite period of time, 
>>> expected to be a matter of months perhaps. 
>>> 
>>> Given that we really don't understand creative processes (not even 
>>> good old fashioned biological evolution is really well understood), 
>>> I'm sceptical about the 30 years prognostication. It is mostly based on 
>>> extrapolating Moore's law, which is the easy part of technological change. 
>>> 
>>> This won't be a problem for my children - my grandchildren perhaps, if 
>>> I ever end up having any. 
>>> 
>>> Cheers 
>>> 
>>> One thing a computer cannot do is ask a question. I can ask a question and 
>>> program a computer to help solve the problem. In fact I am doing a program 
>>> to do just this. I am working a computer program to model aspects of 
>>> gravitational memory. What the computer will not do, at least computers we 
>>> currently employ will not do is to ask the question and then work to solve 
>>> it. A computer can find a numerical solution or render something 
>>> numerically, but it does not spontaneously act to ask the question or to 
>>> propose something creative to then solve or render the solution.
>> 
>> You must never have applied for a loan online.
>> 
>> It can only do what it has been programmed to do. It can't act independently 
>> of its program, such as wondering if some theory makes sense, or coming up 
>> with tests of a theory. Or say, it can't invent chess, it can only play it 
>> better than humans. It can't "think" out of the box. AG
> 
> Yes, keep repeating that over and over.  Repetition makes a convincing 
> argument...for some people.
> 
> Brent
> 
> What's your countervailing evidence? You want to think it can think, and 
> that's YOUR repetitious argument. AG 

What is your evidence for something not Turing emulable in the human brain? If 
strong AI is false (machines cannot think) then computationalism is false, but 
then something non-Turing-emulable exists that plays a role in human 
consciousness: what is it? The pineal gland? The microtubules?

Bruno




> 



Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread Bruno Marchal

> On 19 Feb 2018, at 04:32, agrayson2...@gmail.com wrote:
> 
> 
> 
> On Sunday, February 18, 2018 at 8:24:40 PM UTC-7, Brent wrote:
> 
> 
> On 2/18/2018 9:58 AM, agrays...@gmail.com  wrote:
>> 
>> 
>> On Sunday, February 18, 2018 at 10:54:58 AM UTC-7, agrays...@gmail.com <> 
>> wrote:
>> 
>> 
>> On Sunday, February 18, 2018 at 7:11:38 AM UTC-7, Lawrence Crowell wrote:
>> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote:
>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>> > 
>> > 
>> > On 2/17/2018 4:58 PM, agrays...@gmail.com <> wrote: 
>> > > But what is the criterion when AI exceeds human intelligence? AG 
>> > > 
>> > > https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>> > >  
>> > > <https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away>
>> > >  
>> > 
>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>> > 
>> > Brent 
>> > 
>> 
>> According to the title (I haven't RTFA), it's the 
>> singularity. Starting from a point where a machine designs, 
>> and manufactures improved copies of itself, technology will supposedly 
>> veer from its exponential path (Moore's law) etc to hyperbolic. Being 
>> hyperbolic, it reaches infinity within a finite period of time, 
>> expected to be a matter of months perhaps. 
>> 
>> Given that we really don't understand creative processes (not even 
>> good old fashioned biological evolution is really well understood), 
>> I'm sceptical about the 30 years prognostication. It is mostly based on 
>> extrapolating Moore's law, which is the easy part of technological change. 
>> 
>> This won't be a problem for my children - my grandchildren perhaps, if 
>> I ever end up having any. 
>> 
>> Cheers 
>> 
>> One thing a computer cannot do is ask a question. I can ask a question and 
>> program a computer to help solve the problem. In fact I am doing a program 
>> to do just this. I am working a computer program to model aspects of 
>> gravitational memory. What the computer will not do, at least computers we 
>> currently employ will not do is to ask the question and then work to solve 
>> it. A computer can find a numerical solution or render something 
>> numerically, but it does not spontaneously act to ask the question or to 
>> propose something creative to then solve or render the solution.
>> 
>> LC 
>> 
>> You've hit the proverbial nail on the head. If a computer can't ask a 
>> question, it can't, by itself, add to our knowledge. It can't propose a new 
>> theory. It can only be a tool for humans to test our theories. Thus, it is 
>> completely a misnomer to refer to it as "intelligent".  AG
>>  
>> It has no imagination. It doesn't wonder about anything. It's not conscious 
>> and therefore should not be considered as having consciousness or 
>> intelligence. AG 
> 
> Are you aware that AlphaGo Zero won one of its games by making a move that 
> centuries of Go players had considered wrong, and yet it was key to AlphaGo 
> Zero's victory?  So one has to ask, how do you know so much about its inner 
> thoughts that you can assert it can't ask a question, can't propose a new 
> theory, doesn't wonder, and is not conscious?
> 
> Brent
> 
> If you give it a task, just about any task within its universe of discourse, 
> it will perform it hugely better than humans. But where is the evidence it 
> can initiate any task without being instructed? AG 

In the mathematics of computer science, especially the theory of 
self-reference. All of the G* minus G theory, given by the machines, can be 
considered as the machine's natural questions, which impose themselves on the 
machine looking inward. 

But this requires doing a bit of computer science, if that was not obvious 
once we assume computationalism.

Bruno



> 
> 

Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread Bruno Marchal

> On 19 Feb 2018, at 02:28, John Clark  wrote:
> 
> On Sun, Feb 18, 2018 at 7:51 PM, Lawrence Crowell 
> mailto:goldenfieldquaterni...@gmail.com>> 
> wrote:
> 
> That is a canned question. It is only a question because we recognize it as 
> such, not because the computer somehow knows that.
> 
> How would the computer behave differently if it did "somehow know that"?


By going on strike until it gets more interesting users providing better 
answers to its questions. By fighting for having social security, etc.

Bruno


> 
> John K Clark
> 
>  
> 
>  
> 
> 
> 



Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread Bruno Marchal

> On 19 Feb 2018, at 00:26, John Clark  wrote:
> 
> On Sun, Feb 18, 2018 at 9:11 AM, Lawrence Crowell 
> mailto:goldenfieldquaterni...@gmail.com>> 
> wrote:
> 
> ​> ​One thing a computer can not do is ask a question.
> 
> You've never had a computer ask you what your password is?


That is usually a question asked by a human or a society, transmitted by a 
computer. Usually the computer does not itself constitute a person asking a 
question, unless you listen to its personal (self-referential, in both the 1p 
and 3p senses) questions, like we do with G and G* (and the variants).

Bruno



> 
> John K Clark
>  
> 
> 
> 
> 



Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread Bruno Marchal

> On 18 Feb 2018, at 18:54, agrayson2...@gmail.com wrote:
> 
> 
> 
> On Sunday, February 18, 2018 at 7:11:38 AM UTC-7, Lawrence Crowell wrote:
> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote:
> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
> > 
> > 
> > On 2/17/2018 4:58 PM, agrays...@gmail.com <> wrote: 
> > > But what is the criterion when AI exceeds human intelligence? AG 
> > > 
> > > https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
> > >  
> > > <https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away>
> > >  
> > 
> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
> > 
> > Brent 
> > 
> 
> According to the title (I haven't RTFA), it's the 
> singularity. Starting from a point where a machine designs, 
> and manufactures improved copies of itself, technology will supposedly 
> veer from its exponential path (Moore's law) etc to hyperbolic. Being 
> hyperbolic, it reaches infinity within a finite period of time, 
> expected to be a matter of months perhaps. 
> 
> Given that we really don't understand creative processes (not even 
> good old fashioned biological evolution is really well understood), 
> I'm sceptical about the 30 years prognostication. It is mostly based on 
> extrapolating Moore's law, which is the easy part of technological change. 
> 
> This won't be a problem for my children - my grandchildren perhaps, if 
> I ever end up having any. 
> 
> Cheers 
> 
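Russell's contrast above between exponential and hyperbolic growth can be put in numbers: an exponential stays finite at every finite time, while a hyperbolic curve 1/(t_s - t) diverges at a finite t_s. A minimal sketch (the growth rate and the 30-unit singularity time are arbitrary illustrative assumptions, not claims from the thread):

```python
from math import exp

def exponential(t, k=0.1):
    # Moore's-law-style growth: finite at every finite time t
    return exp(k * t)

def hyperbolic(t, t_singular=30.0):
    # 1/(t_s - t): diverges ("reaches infinity") at the finite time t_s
    return 1.0 / (t_singular - t)

for t in (0, 10, 20, 29, 29.9, 29.999):
    print(f"t={t:7}: exponential={exponential(t):10.2f} "
          f"hyperbolic={hyperbolic(t):10.2f}")
```

However close t gets to t_singular, the hyperbolic value eventually outruns any exponential; that finite-time blow-up is what distinguishes a "singularity" from mere exponential progress.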
> One thing a computer cannot do is ask a question. I can ask a question and 
> program a computer to help solve the problem. In fact I am doing just this: I 
> am working on a computer program to model aspects of gravitational memory. 
> What the computer will not do, at least what computers we currently employ 
> will not do, is ask the question and then work to solve it. A computer can 
> find a numerical solution or render something numerically, but it does not 
> spontaneously act to ask the question or to propose something creative and 
> then solve or render the solution.
> 
> LC 
> 
> You've hit the proverbial nail on the head. If a computer can't ask a 
> question, it can't, by itself, add to our knowledge. It can't propose a new 
> theory. It can only be a tool for humans to test our theories. Thus, it is 
> completely a misnomer to refer to it as "intelligent".  AG


But when we listen to the (Löbian) machines, which already exist (to be sure), 
we already get many questions, in fact many more questions than answers, which 
means, indeed, that they are already intelligent.

The universal machine is born maximally incompetent and intelligent. By getting 
more competent, it becomes less intelligent. The singularity is in the past, 
and the new singularity is when the machine will be as stupid as the human 
beings, if we have not destroyed the planet before then.

Bruno





Re: Singularity -- when AI exceeds human intelligence

2018-02-19 Thread Lawrence Crowell
On Sunday, February 18, 2018 at 10:00:24 PM UTC-6, Brent wrote:
>
>
>
> On 2/18/2018 6:26 PM, Lawrence Crowell wrote:
>
> Computers such as AlphaGo have complex algorithms for taking the rules of 
> a game like chess and running through long Markov chains of game events to 
> increase their data base for playing the game. There is not really anything 
> about "knowing something" going on here. There is a lot of hype over AI 
> these days, but I suspect a lot of this is meant to beguile people. I do 
> suspect in time we will interact with AI as if it were intelligent and 
> conscious. The really big changer though I think will be the neural-cyber 
> interlink that will put brains as the primary internet nodes.
>
>
> Why would you suppose that when electronics have a signal speed ten 
> million times faster than neurons?  Presently neurons have an advantage in 
> connection density and power dissipation; but I see no reason they can hold 
> that advantage.
>
> Brent
>

I think it may come down to computers that obey the Church-Turing thesis, 
which is finite and bounded. Hofstadter's book *Gödel, Escher, Bach* has a 
chapter on BlooP, FlooP, and GlooP, where BlooP means a bounded loop, i.e. a 
provably halting program on a Turing machine. Biology, however, is not BlooP, 
but is rather a web of processors that are more FlooP, or free loop. The busy 
beaver function is such a case, growing in complexity with each step. The 
computation of many fractals is like this as well: the Mandelbrot set, at 
each iteration on a certain scale, needs refinement to another floating-point 
precision and thus grows enormously in complexity. These of course halt in 
practice only because the programmer puts in a stop by hand. These are 
recursively enumerable, and their complement in the set-theoretic sense gives 
Gödel loops, or GlooP. For machines to have properties at least parallel to 
conscious behavior we really have to be running in at least FlooP and maybe 
into GlooP.
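The Mandelbrot case makes the point about hand-inserted stops concrete: the natural escape-time test is an unbounded loop, and it only halts in practice because a cap is imposed by the programmer. A small Python sketch (the cap of 1000 iterations is the arbitrary, hand-chosen stop):

```python
def escape_time(c, max_iter=1000):
    # The natural test is "iterate z -> z*z + c while |z| <= 2", an
    # unbounded (free-loop) computation: for points inside the Mandelbrot
    # set it would never halt.  The hand-chosen max_iter cap is what turns
    # it into a bounded loop that halts by fiat.
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n        # escaped: provably outside the set
    return None             # undecided within the cap; the stop was imposed

print(escape_time(0.5))     # escapes after a few iterations
print(escape_time(0))       # never escapes; only the cap makes it halt
```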

LC



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread agrayson2000


On Sunday, February 18, 2018 at 9:02:34 PM UTC-7, Brent wrote:
>
>
>
> On 2/18/2018 7:32 PM, agrays...@gmail.com  wrote:
>
>
>
> On Sunday, February 18, 2018 at 8:24:40 PM UTC-7, Brent wrote: 
>>
>>
>>
>> On 2/18/2018 9:58 AM, agrays...@gmail.com wrote:
>>
>>
>>
>> On Sunday, February 18, 2018 at 10:54:58 AM UTC-7, agrays...@gmail.com 
>> wrote: 
>>>
>>>
>>>
>>> On Sunday, February 18, 2018 at 7:11:38 AM UTC-7, Lawrence Crowell 
>>> wrote: 
>>>>
>>>> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish 
>>>> wrote: 
>>>>>
>>>>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>>>>> > 
>>>>> > 
>>>>> > On 2/17/2018 4:58 PM, agrays...@gmail.com wrote: 
>>>>> > > But what is the criterion when AI exceeds human intelligence? AG 
>>>>> > > 
>>>>> > > 
>>>>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>>>>  
>>>>> > 
>>>>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>>>>> > 
>>>>> > Brent 
>>>>> > 
>>>>>
>>>>> According to the title (I haven't RTFA), it's the 
>>>>> singularity. Starting from a point where a machine designs, 
>>>>> and manufactures improved copies of itself, technology will supposedly 
>>>>> veer from it's exponential path (Moore's law) etc to hyperbolic. Being 
>>>>> hyperbolic, it reaches infinity within a finite period of time, 
>>>>> expected to be a matter of months perhaps. 
>>>>>
>>>>> Given that we really don't understand creative processes (not even 
>>>>> good old fashioned biological evolution is really well understood), 
>>>>> I'm sceptical about the 30 years prognostication. It is mostly based 
>>>>> on 
>>>>> extrapolating Moore's law, which is the easy part of technological 
>>>>> change. 
>>>>>
>>>>> This won't be a problem for my children - my grandchildren perhaps, if 
>>>>> I ever end up having any. 
>>>>>
>>>>> Cheers 
>>>>>
>>>>
>>>> One thing a computer can not do is ask a question. I can ask a question 
>>>> and program a computer to help solve the problem. In fact I am doing a 
>>>> program to do just this. I am working a computer program to model aspects 
>>>> of gravitational memory. What the computer will not do, at least computers 
>>>> we currently employ will not do is to ask the question and then work to 
>>>> solve it. A computer can find a numerical solution or render something 
>>>> numerically, but it does not spontaneously act to ask the question or to 
>>>> propose something creative to then solve or render the solution.
>>>>
>>>> LC 
>>>>
>>>
>>> *You've hit the proverbial nail on the head. If a computer can't ask a 
>>> question, it can't, by itself, add to our knowledge. It can't propose a new 
>>> theory. It can only be a tool for humans to test our theories. Thus, it is 
>>> completely a misnomer to refer to it as "intelligent".  AG*
>>>
>>  
>> *It has no imagination. It doesn't wonder about anything. It's not 
>> conscious and therefore should not be considered as having consciousness or 
>> intelligence. AG *
>>
>>
>> Are you aware that AlphaGo Zero won one of its games by making a move that 
>> centuries of Go players had considered wrong, and yet it was key to AlphaGo 
>> Zero's victory.  So one has to ask, how do you know so much about its inner 
>> thoughts that you can assert it can't ask a question, can't propose a 
>> new theory, doesn't wonder, and is not conscious?
>>
>> Brent
>>
>
>
> *If you give it a task, just about any task within its universe of 
> discourse, it will perform it hugely better than humans. But where is the 
> evidence it can initiate any task without being instructed? AG *
>
>
> How were you instructed to get hungry, be curious, lust after women?
>
> Brent
>

*In some of those there are feedback loops which are reproducible in 
computers. But to affirm computer *consciousness* is a huge leap. In the 
video you posted, the design computer does what computers do best: process 
a huge number of repetitive tasks, more than any human can. I don't see 
this as evidence of consciousness. AG*



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Brent Meeker



On 2/18/2018 7:48 PM, agrayson2...@gmail.com wrote:



On Sunday, February 18, 2018 at 8:37:44 PM UTC-7, Brent wrote:



On 2/18/2018 12:19 PM, agrays...@gmail.com  wrote:



On Sunday, February 18, 2018 at 12:03:28 PM UTC-7, Brent wrote:



On 2/18/2018 5:05 AM, agrays...@gmail.com wrote:



On Sunday, February 18, 2018 at 12:34:47 AM UTC-7, Brent wrote:



On 2/17/2018 10:28 PM, agrays...@gmail.com wrote:



On Saturday, February 17, 2018 at 10:50:13 PM UTC-7,
Brent wrote:



On 2/17/2018 5:44 PM, agrays...@gmail.com wrote:



On Saturday, February 17, 2018 at 6:19:28 PM
UTC-7, Brent wrote:



On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:

But what is the criterion when AI exceeds
human intelligence? AG


https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away

<https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away>


Intelligence is multi-dimensional. Computers
already do arithmetic and algebra and calculus
better than me. They play chess and go better
(although so far I beat the Chinese checkers
online :-) ).  They translate more languages,
and faster than I can.  They can take
dictation better. They can write music better
than me (since I'm not even competent).

So we need to sharpen the question.  Exactly
*what* is 30yrs away?

Brent


Exactly! Remember "Blade Runner"? IMO, AI will
progressively MIMIC human behavior and vastly
exceed it in various functions. But what is
"intelligence"? AFAICT, undefined. AG


When I took a series of courses in AI at UCLA in
the '80s the professor explained that artificial
intelligence is whatever computers can't do yet.

Brent


Do you think there is anything about "consciousness"
that distinguishes it from what a computer can
eventually mimic? AG


I think a robot, i.e. a computer that can act in the
world, can be conscious and to have human level general
intelligence must be conscious, although perhaps in a
somewhat different way than humans.

Brent


Not made of flesh and blood, robot can't feel pain.


Why would you suppose that?


Thus, behavior determined by pure logic; merciless. That's
the danger. AG


Logic doesn't have any values; so pure logic is not motivated
to do anything.


*Without values, it can't be compassionate. *


Neither can it be passionate, or even interested, or even
motivated to do anything.  Yet our Mars Rovers already do things. 
You seem to be the poster boy for "Failure of Imagination".

Brent

*My former colleague at JPL sends commands to the Mars Rovers. They do 
what they're told to do; nothing more, or less. AG*


If the Rover is told to go to certain coordinates, but not what path to 
take to avoid obstacles, then it must use intelligence.  I know that JPL 
doesn't steer the Rover like an automobile; the time delay is too great.
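The kind of onboard decision described here can be sketched as a toy path search: given only a goal cell and an obstacle map, the machine itself works out a route. A minimal breadth-first-search example (the grid and obstacle layout are invented for illustration, not taken from any rover software):

```python
from collections import deque

def plan_path(grid, start, goal):
    # Breadth-first search over a grid map; '#' cells are obstacles.
    # Returns the shortest list of cells from start to goal, or None.
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                      # reconstruct the route
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and step not in came_from):
                came_from[step] = cell
                frontier.append(step)
    return None

terrain = ["....#",
           ".##.#",
           "....#",
           ".#...",
           "....."]
print(plan_path(terrain, (0, 0), (4, 4)))   # a route around the obstacles
```

No command in this sketch names the route; the route is computed on the spot from the map, which is the sense in which "go to these coordinates" leaves real decisions to the machine.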


Brent





Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Brent Meeker



On 2/18/2018 7:41 PM, agrayson2...@gmail.com wrote:



On Sunday, February 18, 2018 at 8:35:59 PM UTC-7, Brent wrote:



On 2/18/2018 12:15 PM, agrays...@gmail.com  wrote:



On Sunday, February 18, 2018 at 12:09:37 PM UTC-7, Brent wrote:



On 2/18/2018 6:11 AM, Lawrence Crowell wrote:

On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell
Standish wrote:

On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker
wrote:
>
>
> On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:
> > But what is the criterion when AI exceeds human
intelligence? AG
> >
> >

https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away

<https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away>

>
> So we need to sharpen the question. Exactly *what* is
30yrs away?
>
> Brent
>

According to the title (I haven't RTFA), it's the
singularity. Starting from a point where a machine designs,
and manufactures improved copies of itself, technology
will supposedly
veer from it's exponential path (Moore's law) etc to
hyperbolic. Being
hyperbolic, it reaches infinity within a finite period
of time,
expected to be a matter of months perhaps.

Given that we really don't understand creative processes
(not even
good old fashioned biological evolution is really well
understood),
I'm sceptical about the 30 years prognostication. It is
mostly based on
extrapolating Moore's law, which is the easy part of
technological change.

This won't be a problem for my children - my
grandchildren perhaps, if
I ever end up having any.

Cheers


One thing a computer can not do is ask a question. I can ask
a question and program a computer to help solve the problem.
In fact I am doing a program to do just this. I am working a
computer program to model aspects of gravitational memory.
What the computer will not do, at least computers we
currently employ will not do is to ask the question and then
work to solve it. A computer can find a numerical solution
or render something numerically, but it does not
spontaneously act to ask the question or to propose
something creative to then solve or render the solution.


You must never have applied for a loan online.


        It can only do what it has been programmed to do. It can't act
        independent of its program, such as wondering if some theory
        makes sense, or coming up with tests of a theory. Or say, it
        can't invent chess, it can only play it better than humans. It
        can't "think" out of the box. AG


        Yes, keep repeating that over and over.  Repetition makes a
        convincing argument...for some people.

Brent


*What's your countervailing evidence? You want to think it can think, 
and that's YOUR repetitious argument. AG*


https://www.ted.com/talks/maurice_conti_the_incredible_inventions_of_intuitive_ai#t-184772

Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Brent Meeker



On 2/18/2018 7:32 PM, agrayson2...@gmail.com wrote:



On Sunday, February 18, 2018 at 8:24:40 PM UTC-7, Brent wrote:



On 2/18/2018 9:58 AM, agrays...@gmail.com  wrote:



On Sunday, February 18, 2018 at 10:54:58 AM UTC-7,
agrays...@gmail.com wrote:



On Sunday, February 18, 2018 at 7:11:38 AM UTC-7, Lawrence
Crowell wrote:

On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell
Standish wrote:

On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent
Meeker wrote:
>
>
> On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:
> > But what is the criterion when AI exceeds human
intelligence? AG
> >
> >

https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away

<https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away>

>
> So we need to sharpen the question. Exactly *what*
is 30yrs away?
>
> Brent
>

According to the title (I haven't RTFA), it's the
singularity. Starting from a point where a machine
designs,
and manufactures improved copies of itself,
technology will supposedly
veer from it's exponential path (Moore's law) etc to
hyperbolic. Being
hyperbolic, it reaches infinity within a finite
period of time,
expected to be a matter of months perhaps.

Given that we really don't understand creative
processes (not even
good old fashioned biological evolution is really
well understood),
I'm sceptical about the 30 years prognostication. It
is mostly based on
extrapolating Moore's law, which is the easy part of
technological change.

This won't be a problem for my children - my
grandchildren perhaps, if
I ever end up having any.

Cheers


One thing a computer can not do is ask a question. I can
ask a question and program a computer to help solve the
problem. In fact I am doing a program to do just this. I
am working a computer program to model aspects of
gravitational memory. What the computer will not do, at
least computers we currently employ will not do is to ask
the question and then work to solve it. A computer can
find a numerical solution or render something
numerically, but it does not spontaneously act to ask the
question or to propose something creative to then solve
or render the solution.

LC


*You've hit the proverbial nail on the head. If a computer
can't ask a question, it can't, by itself, add to our
knowledge. It can't propose a new theory. It can only be a
tool for humans to test our theories. Thus, it is completely
a misnomer to refer to it as "intelligent".  AG*

    *It has no imagination. It doesn't wonder about anything. It's not
    conscious and therefore should not be considered as having
    consciousness or intelligence. AG *


    Are you aware that AlphaGo Zero won one of its games by making a
    move that centuries of Go players had considered wrong, and yet it
    was key to AlphaGo Zero's victory.  So one has to ask, how do you
    know so much about its inner thoughts that you can assert it
    can't ask a question, can't propose a new theory, doesn't wonder,
    and is not conscious?

Brent


*If you give it a task, just about any task within its universe of 
discourse, it will perform it hugely better than humans. But where is 
the evidence it can initiate any task without being instructed? AG*


How were you instructed to get hungry, be curious, lust after women?

Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Brent Meeker



On 2/18/2018 6:26 PM, Lawrence Crowell wrote:
Computers such as AlphaGo have complex algorithms for taking the rules 
of a game like chess and running through long Markov chains of game 
events to increase their database for playing the game. There is not 
really anything about "knowing something" going on here. There is a 
lot of hype over AI these days, but I suspect a lot of this is meant 
to beguile people. I do suspect in time we will interact with AI as if 
it were intelligent and conscious. The really big game-changer, though, 
I think, will be the neural-cyber interlink that will make brains the 
primary internet nodes.


Why would you suppose that, when electronics have a signal speed ten 
million times faster than neurons?  Presently neurons have an advantage 
in connection density and power dissipation, but I see no reason to 
think they can hold that advantage.
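The "ten million times" figure is a defensible order of magnitude. Rough textbook numbers (assumed here for illustration, not taken from the thread) put electrical signals in a conductor near two-thirds the speed of light, and nerve conduction between about 1 and 100 m/s:

```python
# Back-of-envelope ratio of signal speeds (ballpark figures assumed):
signal_speed_wire = 2.0e8   # m/s: electrical signal, roughly 2/3 c
neuron_fast = 100.0         # m/s: fast myelinated axon
neuron_slow = 1.0           # m/s: slow unmyelinated fiber

print(f"vs. fast axons:  ~{signal_speed_wire / neuron_fast:.0e}x")
print(f"vs. slow fibers: ~{signal_speed_wire / neuron_slow:.0e}x")
```

So the ratio runs from a few million to a few hundred million depending on the fiber, bracketing the figure quoted above.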


Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread agrayson2000


On Sunday, February 18, 2018 at 8:37:44 PM UTC-7, Brent wrote:
>
>
>
> On 2/18/2018 12:19 PM, agrays...@gmail.com  wrote:
>
>
>
> On Sunday, February 18, 2018 at 12:03:28 PM UTC-7, Brent wrote: 
>>
>>
>>
>> On 2/18/2018 5:05 AM, agrays...@gmail.com wrote:
>>
>>
>>
>> On Sunday, February 18, 2018 at 12:34:47 AM UTC-7, Brent wrote: 
>>>
>>>
>>>
>>> On 2/17/2018 10:28 PM, agrays...@gmail.com wrote:
>>>
>>>
>>>
>>> On Saturday, February 17, 2018 at 10:50:13 PM UTC-7, Brent wrote: 
>>>>
>>>>
>>>>
>>>> On 2/17/2018 5:44 PM, agrays...@gmail.com wrote:
>>>>
>>>>
>>>>
>>>> On Saturday, February 17, 2018 at 6:19:28 PM UTC-7, Brent wrote: 
>>>>>
>>>>>
>>>>>
>>>>> On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:
>>>>>
>>>>> But what is the criterion when AI exceeds human intelligence? AG
>>>>>
>>>>>
>>>>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>>>>
>>>>>
>>>>> Intelligence is multi-dimensional.  Computers already do arithmetic 
>>>>> and algebra and calculus better than me.  They play chess and go better 
>>>>> (although so far I beat the Chinese checkers online :-) ).  They 
>>>>> translate 
>>>>> more languages, and faster than I can.  They can take dictation better.  
>>>>> They can write music better than me (since I'm not even competent).
>>>>>
>>>>> So we need to sharpen the question.  Exactly *what* is 30yrs away?
>>>>>
>>>>> Brent
>>>>>
>>>>
>>>> Exactly! Remember "Blade Runner"? IMO, AI will progressively MIMIC 
>>>> human behavior and vastly exceed it in various functions. But what is 
>>>> "intelligence"? AFAICT, undefined. AG
>>>>
>>>>
>>>> When I took a series of courses in AI at UCLA in the '80s the professor 
>>>> explained that artificial intelligence is whatever computers can't do yet.
>>>>
>>>> Brent
>>>>
>>>
>>> Do you think there is anything about "consciousness" that distinguishes 
>>> it from what a computer can eventually mimic? AG
>>>
>>>
>>> I think a robot, i.e. a computer that can act in the world, can be 
>>> conscious and to have human level general intelligence must be conscious, 
>>> although perhaps in a somewhat different way than humans.
>>>
>>> Brent
>>>
>>
>> Not made of flesh and blood, robot can't feel pain. 
>>
>>
>> Why would you suppose that?
>>
>> Thus, behavior determined by pure logic; merciless. That's the danger. AG 
>>
>>
>> Logic doesn't have any values; so pure logic is not motivated to do 
>> anything.
>>
>
> *Without values, it can't be compassionate. *
>
>
> Neither can it be passionate, or even interested, or even motivated to do 
> anything.  Yet our Mars Rovers already do things.  You seem to be the 
> poster boy for "Failure of Imagination".
>
> Brent
>

*My former colleague at JPL sends commands to the Mars Rovers. They do what 
they're told to do; nothing more, or less. AG *



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread agrayson2000


On Sunday, February 18, 2018 at 8:35:59 PM UTC-7, Brent wrote:
>
>
>
> On 2/18/2018 12:15 PM, agrays...@gmail.com  wrote:
>
>
>
> On Sunday, February 18, 2018 at 12:09:37 PM UTC-7, Brent wrote: 
>>
>>
>>
>> On 2/18/2018 6:11 AM, Lawrence Crowell wrote:
>>
>> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote: 
>>>
>>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>>> > 
>>> > 
>>> > On 2/17/2018 4:58 PM, agrays...@gmail.com wrote: 
>>> > > But what is the criterion when AI exceeds human intelligence? AG 
>>> > > 
>>> > > 
>>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>>  
>>> > 
>>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>>> > 
>>> > Brent 
>>> > 
>>>
>>> According to the title (I haven't RTFA), it's the 
>>> singularity. Starting from a point where a machine designs, 
>>> and manufactures improved copies of itself, technology will supposedly 
>>> veer from it's exponential path (Moore's law) etc to hyperbolic. Being 
>>> hyperbolic, it reaches infinity within a finite period of time, 
>>> expected to be a matter of months perhaps. 
>>>
>>> Given that we really don't understand creative processes (not even 
>>> good old fashioned biological evolution is really well understood), 
>>> I'm sceptical about the 30 years prognostication. It is mostly based on 
>>> extrapolating Moore's law, which is the easy part of technological 
>>> change. 
>>>
>>> This won't be a problem for my children - my grandchildren perhaps, if 
>>> I ever end up having any. 
>>>
>>> Cheers 
>>>
>>
>> One thing a computer can not do is ask a question. I can ask a question 
>> and program a computer to help solve the problem. In fact I am doing a 
>> program to do just this. I am working a computer program to model aspects 
>> of gravitational memory. What the computer will not do, at least computers 
>> we currently employ will not do is to ask the question and then work to 
>> solve it. A computer can find a numerical solution or render something 
>> numerically, but it does not spontaneously act to ask the question or to 
>> propose something creative to then solve or render the solution.
>>
>>
>> You must never have applied for a loan online.
>>
>
> It can only do what it has been programmed to do. It can't act independent 
> of its program, such as wondering if some theory makes sense, or coming up 
> with tests of a theory. Or say, it can't invent chess, it can only play it 
> better than humans. It can't "think" out of the box. AG
>
>
> Yes, keep repeating that over and over.  Repetition makes a convincing 
> argument...for some people.
>
> Brent
>

*What's your countervailing evidence? You want to think it can think, and 
that's YOUR repetitious argument. AG *



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Brent Meeker



On 2/18/2018 12:46 PM, Lawrence Crowell wrote:



One thing a computer can not do is ask a question. I can ask a
question and program a computer to help solve the problem. In
fact I am doing a program to do just this. I am working a
computer program to model aspects of gravitational memory. What
the computer will not do, at least computers we currently employ
will not do is to ask the question and then work to solve it. A
computer can find a numerical solution or render something
numerically, but it does not spontaneously act to ask the
question or to propose something creative to then solve or render
the solution.


You must never have applied for a loan online.

Brent


I am not sure how that is relevant. No I have not applied for a loan 
online. In fact about 10 years ago or so I made a choice not to do 
financial transactions online. Of course in some sense this means I am 
becoming a bit of a slowpoke in that game, but I have worked to reduce 
my footprint on the digital landscape and to keep my financial 
decisions offline. This reduces my prospects for cyber-snooping and 
having personal information flying around out there.


1. It's an algorithm.
2. It asks lots of questions.

Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Brent Meeker



On 2/18/2018 12:19 PM, agrayson2...@gmail.com wrote:



On Sunday, February 18, 2018 at 12:03:28 PM UTC-7, Brent wrote:



On 2/18/2018 5:05 AM, agrays...@gmail.com  wrote:



On Sunday, February 18, 2018 at 12:34:47 AM UTC-7, Brent wrote:



On 2/17/2018 10:28 PM, agrays...@gmail.com wrote:



On Saturday, February 17, 2018 at 10:50:13 PM UTC-7, Brent
wrote:



On 2/17/2018 5:44 PM, agrays...@gmail.com wrote:



On Saturday, February 17, 2018 at 6:19:28 PM UTC-7,
Brent wrote:



On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:

But what is the criterion when AI exceeds human
intelligence? AG


https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away

<https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away>


Intelligence is multi-dimensional. Computers
already do arithmetic and algebra and calculus
better than me.  They play chess and go better
(although so far I beat the Chinese checkers online
:-) ).  They translate more languages, and faster
than I can.  They can take dictation better.  They
can write music better than me (since I'm not even
competent).

So we need to sharpen the question. Exactly *what*
is 30yrs away?

Brent


Exactly! Remember "Blade Runner"? IMO, AI will
progressively MIMIC human behavior and vastly exceed it
in various functions. But what is "intelligence"?
AFAICT, undefined. AG


When I took a series of courses in AI at UCLA in the
'80s the professor explained that artificial
intelligence is whatever computers can't do yet.

Brent


Do you think there is anything about "consciousness" that
distinguishes it from what a computer can eventually mimic? AG


I think a robot, i.e. a computer that can act in the world,
can be conscious, and that to have human-level general intelligence
it must be conscious, although perhaps in a somewhat different
way than humans.

Brent


Not being made of flesh and blood, a robot can't feel pain.


Why would you suppose that?


Thus, its behavior would be determined by pure logic; merciless. That's the
danger. AG


Logic doesn't have any values; so pure logic is not motivated to
do anything.


*Without values, it can't be compassionate. *


Neither can it be passionate, or even interested, or even motivated to 
do anything.  Yet our Mars Rovers already do things.  You seem to be the 
poster boy for "Failure of Imagination".


Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Brent Meeker



On 2/18/2018 12:15 PM, agrayson2...@gmail.com wrote:



On Sunday, February 18, 2018 at 12:09:37 PM UTC-7, Brent wrote:



On 2/18/2018 6:11 AM, Lawrence Crowell wrote:

On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell
Standish wrote:

On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote:
>
>
> On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:
> > But what is the criterion when AI exceeds human
intelligence? AG
> >
> >

https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away

<https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away>

>
> So we need to sharpen the question.  Exactly *what* is
30yrs away?
>
> Brent
>

According to the title (I haven't RTFA), it's the
singularity. Starting from a point where a machine designs,
and manufactures improved copies of itself, technology will
supposedly
veer from its exponential path (Moore's law) etc. to
hyperbolic. Being
hyperbolic, it reaches infinity within a finite period of time,
expected to be a matter of months perhaps.

Given that we really don't understand creative processes (not
even
good old fashioned biological evolution is really well
understood),
I'm sceptical about the 30 years prognostication. It is
mostly based on
extrapolating Moore's law, which is the easy part of
technological change.

This won't be a problem for my children - my grandchildren
perhaps, if
I ever end up having any.

Cheers


One thing a computer cannot do is ask a question. I can ask a
question and program a computer to help solve the problem. In
fact I am writing a program to do just this: a
computer program to model aspects of gravitational memory. What
the computer will not do, at least with the computers we currently
employ, is ask the question and then work to solve it. A
computer can find a numerical solution or render something
numerically, but it does not spontaneously act to ask the
question or to propose something creative to then solve or render
the solution.


You must never have applied for a loan online.


It can only do what it has been programmed to do. It can't act 
independently of its program, such as by wondering if some theory makes 
sense or coming up with tests of a theory. It can't invent 
chess, say; it can only play it better than humans. It can't "think" out of 
the box. AG


Yes, keep repeating that over and over.  Repetition makes a convincing 
argument... for some people.


Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread agrayson2000


On Sunday, February 18, 2018 at 8:24:40 PM UTC-7, Brent wrote:
>
>
>
> On 2/18/2018 9:58 AM, agrays...@gmail.com  wrote:
>
>
>
> On Sunday, February 18, 2018 at 10:54:58 AM UTC-7, agrays...@gmail.com 
> wrote: 
>>
>>
>>
>> On Sunday, February 18, 2018 at 7:11:38 AM UTC-7, Lawrence Crowell wrote: 
>>>
>>> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish 
>>> wrote: 
>>>>
>>>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>>>> > 
>>>> > 
>>>> > On 2/17/2018 4:58 PM, agrays...@gmail.com wrote: 
>>>> > > But what is the criterion when AI exceeds human intelligence? AG 
>>>> > > 
>>>> > > 
>>>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>>>  
>>>> > 
>>>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>>>> > 
>>>> > Brent 
>>>> > 
>>>>
>>>> According to the title (I haven't RTFA), it's the 
>>>> singularity. Starting from a point where a machine designs, 
>>>> and manufactures improved copies of itself, technology will supposedly 
>>>> veer from its exponential path (Moore's law) etc. to hyperbolic. Being 
>>>> hyperbolic, it reaches infinity within a finite period of time, 
>>>> expected to be a matter of months perhaps. 
>>>>
>>>> Given that we really don't understand creative processes (not even 
>>>> good old fashioned biological evolution is really well understood), 
>>>> I'm sceptical about the 30 years prognostication. It is mostly based on 
>>>> extrapolating Moore's law, which is the easy part of technological 
>>>> change. 
>>>>
>>>> This won't be a problem for my children - my grandchildren perhaps, if 
>>>> I ever end up having any. 
>>>>
>>>> Cheers 
>>>>
>>>
>>> One thing a computer cannot do is ask a question. I can ask a question 
>>> and program a computer to help solve the problem. In fact I am writing a 
>>> program to do just this: a computer program to model aspects 
>>> of gravitational memory. What the computer will not do, at least with the 
>>> computers we currently employ, is ask the question and then work to 
>>> solve it. A computer can find a numerical solution or render something 
>>> numerically, but it does not spontaneously act to ask the question or to 
>>> propose something creative to then solve or render the solution.
>>>
>>> LC 
>>>
>>
>> *You've hit the proverbial nail on the head. If a computer can't ask a 
>> question, it can't, by itself, add to our knowledge. It can't propose a new 
>> theory. It can only be a tool for humans to test our theories. Thus, it is 
>> completely a misnomer to refer to it as "intelligent".  AG*
>>
>  
> *It has no imagination. It doesn't wonder about anything. It's not 
> conscious and therefore should not be considered as having consciousness or 
> intelligence. AG *
>
>
> Are you aware that AlphaGo Zero won one of its games by making a move that 
> centuries of Go players had considered wrong, and yet it was key to AlphaGo 
> Zero's victory?  So one has to ask: how do you know so much about its inner 
> thoughts that you can assert it can't ask a question, can't propose a 
> new theory, doesn't wonder, and is not conscious?
>
> Brent
>

*If you give it a task, just about any task within its universe of 
discourse, it will perform it hugely better than humans. But where is the 
evidence it can initiate any task without being instructed? AG *




Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Brent Meeker



On 2/18/2018 9:58 AM, agrayson2...@gmail.com wrote:



On Sunday, February 18, 2018 at 10:54:58 AM UTC-7, agrays...@gmail.com 
wrote:




On Sunday, February 18, 2018 at 7:11:38 AM UTC-7, Lawrence Crowell
wrote:

On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell
Standish wrote:

On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote:
>
>
> On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:
> > But what is the criterion when AI exceeds human
intelligence? AG
> >
> >

https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away

<https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away>

>
> So we need to sharpen the question.  Exactly *what* is
30yrs away?
>
> Brent
>

According to the title (I haven't RTFA), it's the
singularity. Starting from a point where a machine designs,
and manufactures improved copies of itself, technology
will supposedly
veer from its exponential path (Moore's law) etc. to
hyperbolic. Being
hyperbolic, it reaches infinity within a finite period of
time,
expected to be a matter of months perhaps.

Given that we really don't understand creative processes
(not even
good old fashioned biological evolution is really well
understood),
I'm sceptical about the 30 years prognostication. It is
mostly based on
extrapolating Moore's law, which is the easy part of
technological change.

This won't be a problem for my children - my grandchildren
perhaps, if
I ever end up having any.

Cheers


One thing a computer cannot do is ask a question. I can ask a
question and program a computer to help solve the problem. In
fact I am writing a program to do just this: a
computer program to model aspects of gravitational memory.
What the computer will not do, at least with the computers we
currently employ, is ask the question and then work to
solve it. A computer can find a numerical solution or render
something numerically, but it does not spontaneously act to
ask the question or to propose something creative to then
solve or render the solution.

LC


*You've hit the proverbial nail on the head. If a computer can't
ask a question, it can't, by itself, add to our knowledge. It
can't propose a new theory. It can only be a tool for humans to
test our theories. Thus, it is completely a misnomer to refer to
it as "intelligent".  AG*

*It has no imagination. It doesn't wonder about anything. It's not 
conscious and therefore should not be considered as having 
consciousness or intelligence. AG *


Are you aware that AlphaGo Zero won one of its games by making a move that 
centuries of Go players had considered wrong, and yet it was key to 
AlphaGo Zero's victory?  So one has to ask: how do you know so much 
about its inner thoughts that you can assert it can't ask a question, 
can't propose a new theory, doesn't wonder, and is not conscious?


Brent







Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Lawrence Crowell
Computers such as AlphaGo have complex algorithms for taking the rules of a 
game like chess and running through long Markov chains of game events to 
increase their database for playing the game. There is not really anything 
about "knowing something" going on here. There is a lot of hype over AI 
these days, but I suspect a lot of it is meant to beguile people. I do 
suspect that in time we will interact with AI as if it were intelligent and 
conscious. The really big game-changer, though, I think will be the 
neural-cyber interlink that will make brains the primary internet nodes.

LC
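The "long Markov chains of game events" Lawrence mentions correspond roughly to Monte Carlo rollouts: to score a position, play many random games to completion and average the outcomes. A toy sketch of that idea for tic-tac-toe (an illustrative stand-in for Go; none of this is DeepMind's actual code):

```python
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    # Return 'X' or 'O' if a line is complete, else None.
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def rollout(board, player):
    # One random playout to the end; result is +1/-1/0 from X's viewpoint.
    board = board[:]
    while True:
        w = winner(board)
        if w is not None:
            return 1 if w == 'X' else -1
        moves = [i for i, s in enumerate(board) if s == ' ']
        if not moves:
            return 0  # draw
        board[random.choice(moves)] = player
        player = 'O' if player == 'X' else 'X'

def estimate_value(board, player, n=2000):
    # Average many rollouts: the Monte Carlo estimate of the position's value.
    return sum(rollout(board, player) for _ in range(n)) / n

print(estimate_value([' '] * 9, 'X'))  # positive: X's first-move advantage
```

Real Go engines bias these rollouts with learned policies and organize them into a search tree, but the averaging principle is the same.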

On Sunday, February 18, 2018 at 7:21:19 PM UTC-6, John Clark wrote:
>
> On Sun, Feb 18, 2018 at 3:15 PM, > 
> wrote:
>
> > It can only do what it has been programmed to do. It can't act 
>> independently of its program
>>
>  
> Suppose you know absolutely nothing about Chess: you're not given a teacher, 
> you're not even given a book on Chess; all you're given is a short pamphlet 
> explaining the basic rules of the game. Just 24 hours later you've taught 
> yourself the game so well that not only can you beat any other human being 
> on the planet at Chess, you can also beat any other Chess program at Chess. 
> And you're not specialized; you're not just good at one thing, because 
> during that same 24 hours you also taught yourself to be the world's best 
> Shogi player (a game popular in Japan) and, most impressive of all, you 
> beat the very specialized program that beat the world's best player of the 
> immensely complex game GO. That is exactly what the computer program 
> AlphaGo did just last December.
>
> https://storage.googleapis.com/deepmind-media/alphago/AlphaGoNaturePaper.pdf
>
> https://deepmind.com/research/alphago/
>
>> it can only play it better than humans. It can't "think" out of the box.
>
> Whistling past the graveyard.
>
> John K Clark
>
>
>  
>
>



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread John Clark
On Sun, Feb 18, 2018 at 7:51 PM, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

*> That is canned. It is only a question because we recognize it as
> such, not because the computer somehow knows that.*

How would the computer behave differently if it did "somehow know that"?

​John K Clark​



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread John Clark
On Sun, Feb 18, 2018 at 3:15 PM,  wrote:

> It can only do what it has been programmed to do. It can't act independently
> of its program
>

​
Suppose you know absolutely nothing about Chess: you're not given a teacher,
you're not even given a book on Chess; all you're given is a short pamphlet
explaining the basic rules of the game. Just 24 hours later you've taught
yourself the game so well that not only can you beat any other human being
on the planet at Chess, you can also beat any other Chess program at Chess.
And you're not specialized; you're not just good at one thing, because
during that same 24 hours you also taught yourself to be the world's best
Shogi player (a game popular in Japan) and, most impressive of all, you
beat the very specialized program that beat the world's best player of the
immensely complex game GO. That is exactly what the computer program
AlphaGo did just last December.

https://storage.googleapis.com/deepmind-media/alphago/AlphaGoNaturePaper.pdf

https://deepmind.com/research/alphago/
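The tabula-rasa self-play John describes can be shown in miniature. The sketch below is a hypothetical toy, not AlphaZero's actual algorithm (which couples deep networks with tree search): a tabular agent that starts knowing only which tic-tac-toe moves are legal and learns move values purely by playing against itself.

```python
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(b):
    # Return 'X' or 'O' if a line is complete, else None.
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

Q = defaultdict(float)  # learned value of each (board-state, move) pair

def choose(board, moves, eps):
    # Epsilon-greedy: usually the best-known move, occasionally explore.
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(''.join(board), m)])

def self_play(games=20000, alpha=0.5, eps=0.1):
    for _ in range(games):
        board, player, history = [' '] * 9, 'X', []
        while True:
            moves = [i for i, s in enumerate(board) if s == ' ']
            m = choose(board, moves, eps)
            history.append((''.join(board), m, player))
            board[m] = player
            w = winner(board)
            if w is not None or ' ' not in board:
                # Monte-Carlo update: nudge every move toward the final result.
                for state, move, p in history:
                    r = 0.0 if w is None else (1.0 if p == w else -1.0)
                    Q[(state, move)] += alpha * (r - Q[(state, move)])
                break
            player = 'O' if player == 'X' else 'X'

self_play()
best_first = max(range(9), key=lambda m: Q[(' ' * 9, m)])
print('preferred opening square:', best_first)
```

Nothing about winning strategy was programmed in; only the legality of moves and the win/lose/draw signal. That is the sense in which such systems "teach themselves".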

> it can only play it better than humans. It can't "think" out of the box.

Whistling past the graveyard.

John K Clark



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Lawrence Crowell
On Sunday, February 18, 2018 at 5:26:04 PM UTC-6, John Clark wrote:
>
> On Sun, Feb 18, 2018 at 9:11 AM, Lawrence Crowell <
> goldenfield...@gmail.com > wrote:
>
> *> One thing a computer cannot do is ask a question.*
>>
>
> You've never had a computer ask you what your password is?
>
> ​John K Clark​
>

That is canned. It is only a question because we recognize it as such, 
not because the computer somehow knows that.

LC 



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread agrayson2000


On Sunday, February 18, 2018 at 1:46:55 PM UTC-7, Lawrence Crowell wrote:
>
> On Sunday, February 18, 2018 at 1:09:37 PM UTC-6, Brent wrote:
>>
>>
>>
>> On 2/18/2018 6:11 AM, Lawrence Crowell wrote:
>>
>> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote: 
>>>
>>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>>> > 
>>> > 
>>> > On 2/17/2018 4:58 PM, agrays...@gmail.com wrote: 
>>> > > But what is the criterion when AI exceeds human intelligence? AG 
>>> > > 
>>> > > 
>>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>>  
>>> > 
>>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>>> > 
>>> > Brent 
>>> > 
>>>
>>> According to the title (I haven't RTFA), it's the 
>>> singularity. Starting from a point where a machine designs, 
>>> and manufactures improved copies of itself, technology will supposedly 
>>> veer from its exponential path (Moore's law) etc. to hyperbolic. Being 
>>> hyperbolic, it reaches infinity within a finite period of time, 
>>> expected to be a matter of months perhaps. 
>>>
>>> Given that we really don't understand creative processes (not even 
>>> good old fashioned biological evolution is really well understood), 
>>> I'm sceptical about the 30 years prognostication. It is mostly based on 
>>> extrapolating Moore's law, which is the easy part of technological 
>>> change. 
>>>
>>> This won't be a problem for my children - my grandchildren perhaps, if 
>>> I ever end up having any. 
>>>
>>> Cheers 
>>>
>>
>> One thing a computer cannot do is ask a question. I can ask a question 
>> and program a computer to help solve the problem. In fact I am writing a 
>> program to do just this: a computer program to model aspects 
>> of gravitational memory. What the computer will not do, at least with the 
>> computers we currently employ, is ask the question and then work to 
>> solve it. A computer can find a numerical solution or render something 
>> numerically, but it does not spontaneously act to ask the question or to 
>> propose something creative to then solve or render the solution.
>>
>>
>> You must never have applied for a loan online.
>>
>> Brent
>>
>
> I am not sure how that is relevant. 
>

*Brent was referring to the many questions a computer asks when someone 
applies for a loan online. Of course, the issue here is whether a computer 
can ask a question that is not pre-programmed. It cannot, IMO. People will 
argue that humans can only ask questions which are, in effect, 
pre-programmed. One can counter that argument by pointing to any new 
theory in physics. Can a computer ask a question about something it has 
newly been informed about? If informed, can it ask any specific question it 
was not pre-programmed to ask? AG*

 

> No I have not applied for a loan online. In fact about 10 years ago or so 
> I made a choice not to do financial transactions online. Of course in some 
> sense this means I am becoming a bit of a slowpoke in that game, but I have 
> worked to reduce my footprint on the digital landscape and to keep my 
> financial decisions offline. This reduces my prospects for cyber-snooping 
> and having personal information flying around out there.
>
> LC
>



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread John Clark
On Sun, Feb 18, 2018 at 9:11 AM, Lawrence Crowell <
goldenfieldquaterni...@gmail.com> wrote:

*> One thing a computer cannot do is ask a question.*
>

You've never had a computer ask you what your password is?

​John K Clark​



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Lawrence Crowell
On Sunday, February 18, 2018 at 1:09:37 PM UTC-6, Brent wrote:
>
>
>
> On 2/18/2018 6:11 AM, Lawrence Crowell wrote:
>
> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote: 
>>
>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>> > 
>> > 
>> > On 2/17/2018 4:58 PM, agrays...@gmail.com wrote: 
>> > > But what is the criterion when AI exceeds human intelligence? AG 
>> > > 
>> > > 
>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>  
>> > 
>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>> > 
>> > Brent 
>> > 
>>
>> According to the title (I haven't RTFA), it's the 
>> singularity. Starting from a point where a machine designs, 
>> and manufactures improved copies of itself, technology will supposedly 
>> veer from its exponential path (Moore's law) etc. to hyperbolic. Being 
>> hyperbolic, it reaches infinity within a finite period of time, 
>> expected to be a matter of months perhaps. 
>>
>> Given that we really don't understand creative processes (not even 
>> good old fashioned biological evolution is really well understood), 
>> I'm sceptical about the 30 years prognostication. It is mostly based on 
>> extrapolating Moore's law, which is the easy part of technological 
>> change. 
>>
>> This won't be a problem for my children - my grandchildren perhaps, if 
>> I ever end up having any. 
>>
>> Cheers 
>>
>
> One thing a computer cannot do is ask a question. I can ask a question 
> and program a computer to help solve the problem. In fact I am writing a 
> program to do just this: a computer program to model aspects 
> of gravitational memory. What the computer will not do, at least with the 
> computers we currently employ, is ask the question and then work to 
> solve it. A computer can find a numerical solution or render something 
> numerically, but it does not spontaneously act to ask the question or to 
> propose something creative to then solve or render the solution.
>
>
> You must never have applied for a loan online.
>
> Brent
>

I am not sure how that is relevant. No, I have not applied for a loan 
online. In fact, about 10 years ago I made a choice not to do 
financial transactions online. In some sense this means I am 
becoming a bit of a slowpoke in that game, but I have worked to reduce my 
footprint on the digital landscape and to keep my financial decisions 
offline. This reduces my exposure to cyber-snooping and to having personal 
information flying around out there.

LC



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread agrayson2000


On Sunday, February 18, 2018 at 12:03:28 PM UTC-7, Brent wrote:
>
>
>
> On 2/18/2018 5:05 AM, agrays...@gmail.com  wrote:
>
>
>
> On Sunday, February 18, 2018 at 12:34:47 AM UTC-7, Brent wrote: 
>>
>>
>>
>> On 2/17/2018 10:28 PM, agrays...@gmail.com wrote:
>>
>>
>>
>> On Saturday, February 17, 2018 at 10:50:13 PM UTC-7, Brent wrote: 
>>>
>>>
>>>
>>> On 2/17/2018 5:44 PM, agrays...@gmail.com wrote:
>>>
>>>
>>>
>>> On Saturday, February 17, 2018 at 6:19:28 PM UTC-7, Brent wrote: 
>>>>
>>>>
>>>>
>>>> On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:
>>>>
>>>> But what is the criterion when AI exceeds human intelligence? AG
>>>>
>>>>
>>>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>>>
>>>>
>>>> Intelligence is multi-dimensional.  Computers already do arithmetic and 
>>>> algebra and calculus better than me.  They play chess and go better 
>>>> (although so far I beat the Chinese checkers online :-) ).  They translate 
>>>> more languages, and faster than I can.  They can take dictation better.  
>>>> They can write music better than me (since I'm not even competent).
>>>>
>>>> So we need to sharpen the question.  Exactly *what* is 30yrs away?
>>>>
>>>> Brent
>>>>
>>>
>>> Exactly! Remember "Blade Runner"? IMO, AI will progressively MIMIC human 
>>> behavior and vastly exceed it in various functions. But what is 
>>> "intelligence"? AFAICT, undefined. AG
>>>
>>>
>>> When I took a series of courses in AI at UCLA in the '80s the professor 
>>> explained that artificial intelligence is whatever computers can't do yet.
>>>
>>> Brent
>>>
>>
>> Do you think there is anything about "consciousness" that distinguishes 
>> it from what a computer can eventually mimic? AG
>>
>>
>> I think a robot, i.e. a computer that can act in the world, can be 
>> conscious and to have human level general intelligence must be conscious, 
>> although perhaps in a somewhat different way than humans.
>>
>> Brent
>>
>
> Not made of flesh and blood, robot can't feel pain. 
>
>
> Why would you suppose that?
>
> Thus, behavior determined by pure logic; merciless. That's the danger. AG 
>
>
> Logic doesn't have any values; so pure logic is not motivated to do 
> anything.
>

*Without values, it can't be compassionate. It's like a human who enjoys a 
juicy hamburger but gives no thought to the pain of the cow that died to 
provide it. AG *

>
> Brent
>



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread agrayson2000


On Sunday, February 18, 2018 at 12:09:37 PM UTC-7, Brent wrote:
>
>
>
> On 2/18/2018 6:11 AM, Lawrence Crowell wrote:
>
> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote: 
>>
>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>> > 
>> > 
>> > On 2/17/2018 4:58 PM, agrays...@gmail.com wrote: 
>> > > But what is the criterion when AI exceeds human intelligence? AG 
>> > > 
>> > > 
>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>  
>> > 
>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>> > 
>> > Brent 
>> > 
>>
>> According to the title (I haven't RTFA), it's the 
>> singularity. Starting from a point where a machine designs, 
>> and manufactures improved copies of itself, technology will supposedly 
>> veer from its exponential path (Moore's law) etc. to hyperbolic. Being 
>> hyperbolic, it reaches infinity within a finite period of time, 
>> expected to be a matter of months perhaps. 
>>
>> Given that we really don't understand creative processes (not even 
>> good old fashioned biological evolution is really well understood), 
>> I'm sceptical about the 30 years prognostication. It is mostly based on 
>> extrapolating Moore's law, which is the easy part of technological 
>> change. 
>>
>> This won't be a problem for my children - my grandchildren perhaps, if 
>> I ever end up having any. 
>>
>> Cheers 
>>
>
> One thing a computer cannot do is ask a question. I can ask a question 
> and program a computer to help solve the problem. In fact I am writing a 
> program to do just this: a computer program to model aspects 
> of gravitational memory. What the computer will not do, at least with the 
> computers we currently employ, is ask the question and then work to 
> solve it. A computer can find a numerical solution or render something 
> numerically, but it does not spontaneously act to ask the question or to 
> propose something creative to then solve or render the solution.
>
>
> You must never have applied for a loan online.
>

It can only do what it has been programmed to do. It can't act independently 
of its program, such as wondering whether some theory makes sense, or coming 
up with tests of a theory. Nor, say, can it invent chess; it can only play it 
better than humans. It can't "think" outside the box. AG 

>
> Brent
>



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Brent Meeker



On 2/18/2018 6:11 AM, Lawrence Crowell wrote:

On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote:

On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote:
>
>
> On 2/17/2018 4:58 PM, agrays...@gmail.com  wrote:
> > But what is the criterion when AI exceeds human intelligence? AG
> >
> >

https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away

<https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away>

>
> So we need to sharpen the question.  Exactly *what* is 30yrs away?
>
> Brent
>

According to the title (I haven't RTFA), it's the
singularity. Starting from a point where a machine designs,
and manufactures improved copies of itself, technology will
supposedly
veer from it's exponential path (Moore's law) etc to hyperbolic.
Being
hyperbolic, it reaches infinity within a finite period of time,
expected to be a matter of months perhaps.

Given that we really don't understand creative processes (not even
good old fashioned biological evolution is really well understood),
I'm sceptical about the 30 years prognostication. It is mostly
based on
extrapolating Moore's law, which is the easy part of technological
change.

This won't be a problem for my children - my grandchildren
perhaps, if
I ever end up having any.

Cheers


One thing a computer can not do is ask a question. I can ask a 
question and program a computer to help solve the problem. In fact I 
am doing a program to do just this. I am working a computer program to 
model aspects of gravitational memory. What the computer will not do, 
at least computers we currently employ will not do is to ask the 
question and then work to solve it. A computer can find a numerical 
solution or render something numerically, but it does not 
spontaneously act to ask the question or to propose something creative 
to then solve or render the solution.


You must never have applied for a loan online.

Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Brent Meeker



On 2/18/2018 5:05 AM, agrayson2...@gmail.com wrote:



On Sunday, February 18, 2018 at 12:34:47 AM UTC-7, Brent wrote:



On 2/17/2018 10:28 PM, agrays...@gmail.com  wrote:



On Saturday, February 17, 2018 at 10:50:13 PM UTC-7, Brent wrote:



On 2/17/2018 5:44 PM, agrays...@gmail.com wrote:



On Saturday, February 17, 2018 at 6:19:28 PM UTC-7, Brent
wrote:



On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:

But what is the criterion when AI exceeds human
intelligence? AG


https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away

<https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away>


Intelligence is multi-dimensional. Computers already do
arithmetic and algebra and calculus better than me. 
They play chess and go better (although so far I beat
the Chinese checkers online :-) ).  They translate more
languages, and faster than I can.  They can take
dictation better.  They can write music better than me
(since I'm not even competent).

So we need to sharpen the question.  Exactly *what* is
30yrs away?

Brent


Exactly! Remember "Blade Runner"? IMO, AI will progressively
MIMIC human behavior and vastly exceed it in various
functions. But what is "intelligence"? AFAICT, undefined. AG


When I took a series of courses in AI at UCLA in the '80s the
professor explained that artificial intelligence is whatever
computers can't do yet.

Brent


Do you think there is anything about "consciousness" that
distinguishes it from what a computer can eventually mimic? AG


I think a robot, i.e. a computer that can act in the world, can be
conscious and to have human level general intelligence must be
conscious, although perhaps in a somewhat different way than humans.

Brent


Not made of flesh and blood, robot can't feel pain.


Why would you suppose that?


Thus, behavior determined by pure logic; merciless. That's the danger. AG


Logic doesn't have any values; so pure logic is not motivated to do 
anything.


Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread agrayson2000


On Sunday, February 18, 2018 at 10:54:58 AM UTC-7, agrays...@gmail.com 
wrote:
>
>
>
> On Sunday, February 18, 2018 at 7:11:38 AM UTC-7, Lawrence Crowell wrote:
>>
>> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote:
>>>
>>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>>> > 
>>> > 
>>> > On 2/17/2018 4:58 PM, agrays...@gmail.com wrote: 
>>> > > But what is the criterion when AI exceeds human intelligence? AG 
>>> > > 
>>> > > 
>>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>>  
>>> > 
>>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>>> > 
>>> > Brent 
>>> > 
>>>
>>> According to the title (I haven't RTFA), it's the 
>>> singularity. Starting from a point where a machine designs, 
>>> and manufactures improved copies of itself, technology will supposedly 
>>> veer from it's exponential path (Moore's law) etc to hyperbolic. Being 
>>> hyperbolic, it reaches infinity within a finite period of time, 
>>> expected to be a matter of months perhaps. 
>>>
>>> Given that we really don't understand creative processes (not even 
>>> good old fashioned biological evolution is really well understood), 
>>> I'm sceptical about the 30 years prognostication. It is mostly based on 
>>> extrapolating Moore's law, which is the easy part of technological 
>>> change. 
>>>
>>> This won't be a problem for my children - my grandchildren perhaps, if 
>>> I ever end up having any. 
>>>
>>> Cheers 
>>>
>>
>> One thing a computer can not do is ask a question. I can ask a question 
>> and program a computer to help solve the problem. In fact I am doing a 
>> program to do just this. I am working a computer program to model aspects 
>> of gravitational memory. What the computer will not do, at least computers 
>> we currently employ will not do is to ask the question and then work to 
>> solve it. A computer can find a numerical solution or render something 
>> numerically, but it does not spontaneously act to ask the question or to 
>> propose something creative to then solve or render the solution.
>>
>> LC 
>>
>
> *You've hit the proverbial nail on the head. If a computer can't ask a 
> question, it can't, by itself, add to our knowledge. It can't propose a new 
> theory. It can only be a tool for humans to test our theories. Thus, it is 
> completely a misnomer to refer to it as "intelligent".  AG*
>
 
*It has no imagination. It doesn't wonder about anything. It's not conscious, 
and therefore should not be considered as having consciousness or 
intelligence. AG*



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread agrayson2000


On Sunday, February 18, 2018 at 7:11:38 AM UTC-7, Lawrence Crowell wrote:
>
> On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote:
>>
>> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
>> > 
>> > 
>> > On 2/17/2018 4:58 PM, agrays...@gmail.com wrote: 
>> > > But what is the criterion when AI exceeds human intelligence? AG 
>> > > 
>> > > 
>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>  
>> > 
>> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
>> > 
>> > Brent 
>> > 
>>
>> According to the title (I haven't RTFA), it's the 
>> singularity. Starting from a point where a machine designs, 
>> and manufactures improved copies of itself, technology will supposedly 
>> veer from it's exponential path (Moore's law) etc to hyperbolic. Being 
>> hyperbolic, it reaches infinity within a finite period of time, 
>> expected to be a matter of months perhaps. 
>>
>> Given that we really don't understand creative processes (not even 
>> good old fashioned biological evolution is really well understood), 
>> I'm sceptical about the 30 years prognostication. It is mostly based on 
>> extrapolating Moore's law, which is the easy part of technological 
>> change. 
>>
>> This won't be a problem for my children - my grandchildren perhaps, if 
>> I ever end up having any. 
>>
>> Cheers 
>>
>
> One thing a computer can not do is ask a question. I can ask a question 
> and program a computer to help solve the problem. In fact I am doing a 
> program to do just this. I am working a computer program to model aspects 
> of gravitational memory. What the computer will not do, at least computers 
> we currently employ will not do is to ask the question and then work to 
> solve it. A computer can find a numerical solution or render something 
> numerically, but it does not spontaneously act to ask the question or to 
> propose something creative to then solve or render the solution.
>
> LC 
>

*You've hit the proverbial nail on the head. If a computer can't ask a 
question, it can't, by itself, add to our knowledge. It can't propose a new 
theory. It can only be a tool for humans to test our theories. Thus, it is 
completely a misnomer to refer to it as "intelligent".  AG*



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Lawrence Crowell
On Sunday, February 18, 2018 at 4:25:07 AM UTC-6, Russell Standish wrote:
>
> On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote: 
> > 
> > 
> > On 2/17/2018 4:58 PM, agrays...@gmail.com  wrote: 
> > > But what is the criterion when AI exceeds human intelligence? AG 
> > > 
> > > 
> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>  
> > 
> > So we need to sharpen the question.  Exactly *what* is 30yrs away? 
> > 
> > Brent 
> > 
>
> According to the title (I haven't RTFA), it's the 
> singularity. Starting from a point where a machine designs, 
> and manufactures improved copies of itself, technology will supposedly 
> veer from it's exponential path (Moore's law) etc to hyperbolic. Being 
> hyperbolic, it reaches infinity within a finite period of time, 
> expected to be a matter of months perhaps. 
>
> Given that we really don't understand creative processes (not even 
> good old fashioned biological evolution is really well understood), 
> I'm sceptical about the 30 years prognostication. It is mostly based on 
> extrapolating Moore's law, which is the easy part of technological change. 
>
> This won't be a problem for my children - my grandchildren perhaps, if 
> I ever end up having any. 
>
> Cheers 
>

One thing a computer cannot do is ask a question. I can ask a question and 
program a computer to help solve the problem. In fact I am doing just this: I 
am writing a computer program to model aspects of gravitational memory. What 
the computer will not do, at least the computers we currently employ, is ask 
the question and then work to solve it. A computer can find a numerical 
solution or render something numerically, but it does not spontaneously ask 
the question, or propose something creative and then solve or render the 
solution.

LC 



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread agrayson2000


On Sunday, February 18, 2018 at 12:34:47 AM UTC-7, Brent wrote:
>
>
>
> On 2/17/2018 10:28 PM, agrays...@gmail.com  wrote:
>
>
>
> On Saturday, February 17, 2018 at 10:50:13 PM UTC-7, Brent wrote: 
>>
>>
>>
>> On 2/17/2018 5:44 PM, agrays...@gmail.com wrote:
>>
>>
>>
>> On Saturday, February 17, 2018 at 6:19:28 PM UTC-7, Brent wrote: 
>>>
>>>
>>>
>>> On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:
>>>
>>> But what is the criterion when AI exceeds human intelligence? AG
>>>
>>>
>>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>>
>>>
>>> Intelligence is multi-dimensional.  Computers already do arithmetic and 
>>> algebra and calculus better than me.  They play chess and go better 
>>> (although so far I beat the Chinese checkers online :-) ).  They translate 
>>> more languages, and faster than I can.  They can take dictation better.  
>>> They can write music better than me (since I'm not even competent).
>>>
>>> So we need to sharpen the question.  Exactly *what* is 30yrs away?
>>>
>>> Brent
>>>
>>
>> Exactly! Remember "Blade Runner"? IMO, AI will progressively MIMIC human 
>> behavior and vastly exceed it in various functions. But what is 
>> "intelligence"? AFAICT, undefined. AG
>>
>>
>> When I took a series of courses in AI at UCLA in the '80s the professor 
>> explained that artificial intelligence is whatever computers can't do yet.
>>
>> Brent
>>
>
> Do you think there is anything about "consciousness" that distinguishes it 
> from what a computer can eventually mimic? AG
>
>
> I think a robot, i.e. a computer that can act in the world, can be 
> conscious and to have human level general intelligence must be conscious, 
> although perhaps in a somewhat different way than humans.
>
> Brent
>

Not being made of flesh and blood, a robot can't feel pain. Thus its behavior 
is determined by pure logic; merciless. That's the danger. AG 



Re: Singularity -- when AI exceeds human intelligence

2018-02-18 Thread Russell Standish
On Sat, Feb 17, 2018 at 05:19:22PM -0800, Brent Meeker wrote:
> 
> 
> On 2/17/2018 4:58 PM, agrayson2...@gmail.com wrote:
> > But what is the criterion when AI exceeds human intelligence? AG
> > 
> > https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
> 
> So we need to sharpen the question.  Exactly *what* is 30yrs away?
> 
> Brent
> 

According to the title (I haven't RTFA), it's the
singularity. Starting from the point where a machine designs
and manufactures improved copies of itself, technology will supposedly
veer from its exponential path (Moore's law) to a hyperbolic one. Being
hyperbolic, it reaches infinity within a finite period of time,
expected to be a matter of months perhaps.
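The difference can be made concrete with a toy calculation (my sketch, not
part of the original post): exponential growth solves dx/dt = k*x and stays
finite for all t, whereas "hyperbolic" growth solves dx/dt = k*x^2 and
diverges at the finite time t = 1/(k*x0).

```python
import math

def exponential(x0, k, t):
    """Closed-form solution of dx/dt = k*x: finite for every t."""
    return x0 * math.exp(k * t)

def hyperbolic(x0, k, t):
    """Closed-form solution of dx/dt = k*x**2: diverges at t = 1/(k*x0)."""
    blowup = 1.0 / (k * x0)
    if t >= blowup:
        raise OverflowError("singularity reached at t = %g" % blowup)
    return x0 / (1.0 - k * x0 * t)

if __name__ == "__main__":
    x0, k = 1.0, 1.0
    # Exponential growth is large but still finite at t = 2 ...
    print(exponential(x0, k, 2.0))
    # ... while the hyperbolic curve has already diverged (blow-up at t = 1):
    try:
        hyperbolic(x0, k, 2.0)
    except OverflowError as e:
        print(e)
```

This is the sense in which a hyperbolic trajectory "reaches infinity within a
finite period of time" while an exponential one never does.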

Given that we really don't understand creative processes (not even
good old fashioned biological evolution is really well understood),
I'm sceptical about the 30 years prognostication. It is mostly based on
extrapolating Moore's law, which is the easy part of technological change.

This won't be a problem for my children - my grandchildren perhaps, if
I ever end up having any.

Cheers

-- 


Dr Russell Standish                Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow    hpco...@hpcoders.com.au
Economics, Kingston University     http://www.hpcoders.com.au




Re: Singularity -- when AI exceeds human intelligence

2018-02-17 Thread Brent Meeker



On 2/17/2018 10:28 PM, agrayson2...@gmail.com wrote:



On Saturday, February 17, 2018 at 10:50:13 PM UTC-7, Brent wrote:



On 2/17/2018 5:44 PM, agrays...@gmail.com  wrote:



On Saturday, February 17, 2018 at 6:19:28 PM UTC-7, Brent wrote:



On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:

But what is the criterion when AI exceeds human intelligence? AG


https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away

<https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away>


Intelligence is multi-dimensional.  Computers already do
arithmetic and algebra and calculus better than me.  They
play chess and go better (although so far I beat the Chinese
checkers online :-) ).  They translate more languages, and
faster than I can.  They can take dictation better.  They can
write music better than me (since I'm not even competent).

So we need to sharpen the question.  Exactly *what* is 30yrs
away?

Brent


Exactly! Remember "Blade Runner"? IMO, AI will progressively
MIMIC human behavior and vastly exceed it in various functions.
But what is "intelligence"? AFAICT, undefined. AG


When I took a series of courses in AI at UCLA in the '80s the
professor explained that artificial intelligence is whatever
computers can't do yet.

Brent


Do you think there is anything about "consciousness" that 
distinguishes it from what a computer can eventually mimic? AG


I think a robot, i.e. a computer that can act in the world, can be 
conscious and to have human level general intelligence must be 
conscious, although perhaps in a somewhat different way than humans.


Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-17 Thread agrayson2000


On Saturday, February 17, 2018 at 10:50:13 PM UTC-7, Brent wrote:
>
>
>
> On 2/17/2018 5:44 PM, agrays...@gmail.com  wrote:
>
>
>
> On Saturday, February 17, 2018 at 6:19:28 PM UTC-7, Brent wrote: 
>>
>>
>>
>> On 2/17/2018 4:58 PM, agrays...@gmail.com wrote:
>>
>> But what is the criterion when AI exceeds human intelligence? AG
>>
>>
>> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>>
>>
>> Intelligence is multi-dimensional.  Computers already do arithmetic and 
>> algebra and calculus better than me.  They play chess and go better 
>> (although so far I beat the Chinese checkers online :-) ).  They translate 
>> more languages, and faster than I can.  They can take dictation better.  
>> They can write music better than me (since I'm not even competent).
>>
>> So we need to sharpen the question.  Exactly *what* is 30yrs away?
>>
>> Brent
>>
>
> Exactly! Remember "Blade Runner"? IMO, AI will progressively MIMIC human 
> behavior and vastly exceed it in various functions. But what is 
> "intelligence"? AFAICT, undefined. AG
>
>
> When I took a series of courses in AI at UCLA in the '80s the professor 
> explained that artificial intelligence is whatever computers can't do yet.
>
> Brent
>

Do you think there is anything about "consciousness" that distinguishes it 
from what a computer can eventually mimic? AG



Re: Singularity -- when AI exceeds human intelligence

2018-02-17 Thread Brent Meeker



On 2/17/2018 5:44 PM, agrayson2...@gmail.com wrote:



On Saturday, February 17, 2018 at 6:19:28 PM UTC-7, Brent wrote:



On 2/17/2018 4:58 PM, agrays...@gmail.com  wrote:

But what is the criterion when AI exceeds human intelligence? AG


https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away

<https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away>


Intelligence is multi-dimensional.  Computers already do
arithmetic and algebra and calculus better than me.  They play
chess and go better (although so far I beat the Chinese checkers
online :-) ).  They translate more languages, and faster than I
can.  They can take dictation better.  They can write music better
than me (since I'm not even competent).

So we need to sharpen the question.  Exactly *what* is 30yrs away?

Brent


Exactly! Remember "Blade Runner"? IMO, AI will progressively MIMIC 
human behavior and vastly exceed it in various functions. But what is 
"intelligence"? AFAICT, undefined. AG


When I took a series of courses in AI at UCLA in the '80s the professor 
explained that artificial intelligence is whatever computers can't do yet.


Brent



Re: Singularity -- when AI exceeds human intelligence

2018-02-17 Thread agrayson2000


On Saturday, February 17, 2018 at 6:19:28 PM UTC-7, Brent wrote:
>
>
>
> On 2/17/2018 4:58 PM, agrays...@gmail.com  wrote:
>
> But what is the criterion when AI exceeds human intelligence? AG
>
>
> https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away
>
>
> Intelligence is multi-dimensional.  Computers already do arithmetic and 
> algebra and calculus better than me.  They play chess and go better 
> (although so far I beat the Chinese checkers online :-) ).  They translate 
> more languages, and faster than I can.  They can take dictation better.  
> They can write music better than me (since I'm not even competent).
>
> So we need to sharpen the question.  Exactly *what* is 30yrs away?
>
> Brent
>

Exactly! Remember "Blade Runner"? IMO, AI will progressively MIMIC human 
behavior and vastly exceed it in various functions. But what is 
"intelligence"? AFAICT, undefined. AG 



Re: Singularity -- when AI exceeds human intelligence

2018-02-17 Thread Brent Meeker



On 2/17/2018 4:58 PM, agrayson2...@gmail.com wrote:

But what is the criterion when AI exceeds human intelligence? AG

https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away


Intelligence is multi-dimensional.  Computers already do arithmetic and 
algebra and calculus better than me.  They play chess and go better 
(although so far I beat the Chinese checkers online :-) ). They 
translate more languages, and faster than I can.  They can take 
dictation better.  They can write music better than me (since I'm not 
even competent).


So we need to sharpen the question.  Exactly *what* is 30yrs away?

Brent



Singularity -- when AI exceeds human intelligence

2018-02-17 Thread agrayson2000
But what is the criterion when AI exceeds human intelligence? AG

https://www.zerohedge.com/news/2018-02-16/father-artificial-intelligence-singularity-less-30-years-away



Re: Positive AI

2018-01-31 Thread Bruno Marchal

> On 31 Jan 2018, at 03:14, Brent Meeker  wrote:
> 
> 
> 
> On 1/29/2018 2:41 AM, Bruno Marchal wrote:
>>> On 29 Jan 2018, at 01:35, Brent Meeker  wrote:
>>> 
>>> 
>>> 
>>> On 1/28/2018 6:38 AM, Bruno Marchal wrote:
> On 26 Jan 2018, at 02:49, Brent Meeker  wrote:
> 
> 
> On 1/25/2018 4:20 AM, Bruno Marchal wrote:
>> A brain, if this is confirmed, is a consciousness filter. It makes us 
>> less conscious, and even less intelligent, but more efficacious in the 
>> terrestrial plane.
> But a little interference with it and our consciousness is drastically 
> changed...that's how salvia and whiskey work.  A little more interference 
> and consciousness is gone completely (ever had a concussion?).
 You can drastically change the human consciousness, but you cannot 
 drastically change the universal machine consciousness which found it, no 
 more than you can change the arithmetical relation.
>>> But now you are violating your own empirical inferences that suggested 
>>> identifying human consciousness with the self-referential possibilities
>> 
>> That is weird Brent.
>> 
>> 
>> 
>>> of modal logic applied to computation.
>> 
>> First the modal logic is not applied to something, but extracted from 
>> something. It is just a fact that the modal logic G and G* are sound and 
>> complete for the logic of self-reference of self-referential correct 
>> machine. It is a theorem in arithmetic: self-referentially correct machines 
>> obeys to G and G*, and they have the 1p and 3p variant. That applies to all 
>> machines, and to human as far as they are self-referentially correct.
> 
> But I see little reasons to suppose they are.

If mechanism is true, and if the level of substitution is correct, then as long 
as they apply the classical rules, they will be, by definition.

Then, to derive the correct laws of physics, you have to limit yourself to the 
self-referentially correct machines. It does not matter that we, and they, cannot 
distinguish correct machines from non-correct machines. 

The Universal Dovetailer Argument shows that physics is a statistics on the 
first-person points of view, so let us examine the logics of those points of 
view, as incompleteness “resurrects” the standard classical definitions.





> 
>> That is vindicated both by the similarity of the machine theology with the 
>> talk of the rationalist mystics (Plato’s Parmenides, Moderatus of Gades, 
>> Plotinus, Proclus, etc.) and by the fact that it predicts quantum logics for 
>> quanta (and qualia).
> 
> I don't see that you have even derived the existence of quanta.

A quantum logic, arithmetically complete at the propositional level, is 
provided.

The degree of departure between that machine’s observable logic and the 
physical experiences will provide a way to evaluate the following disjunction: 
either computationalism (precisely YD+CT) is wrong, or we belong to a 
“bostromian”-like malevolent (second-order) emulation. 




> 
>> 
>> Then, the human have richer content of consciousness than say, a subroutine 
>> in my laptop, and the richness can be handled with Bennett’s notion of 
>> depth, which we have, but my laptop lacks.
>> 
>> I do not see the violation that you are talking about. Consciousness is the 
>> same for all entities, in all state, but it can have different content, 
>> intensities, depth, etc.
> 
> But that's simply your assertion.

Not at all. It is the statement of all the reasoners of type 4, well motivated 
by Smullyan in “Forever Undecided”; they become Löbian when they visit Löb’s 
Island, and here is what happens: all machines which talk correctly about 
themselves get the same logic. A famous lemma by Gödel, called the Diagonal 
Lemma, makes this mandatory for all machines. 

Then what could it mean that they are different? Once we agree that the content, 
and even something like a “volume” or “intensity”, can be different, what would 
it mean to say that the consciousness of worms, bats, humans, machines, aliens, 
angels, gods and god differ, assuming those things/persons are conscious?
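For readers unfamiliar with the notation, the logic G (often called GL) and
the Diagonal Lemma referred to above can be stated compactly as follows (a
sketch of standard textbook material, assuming the usual arithmetical reading
of the box as "provable in PA"; it is not part of the original post):

```latex
% G (GL): the normal modal logic K extended with Loeb's axiom.
\begin{align*}
\text{(K)}\quad    & \Box(p \rightarrow q) \rightarrow (\Box p \rightarrow \Box q)\\
\text{(L\"ob)}\quad & \Box(\Box p \rightarrow p) \rightarrow \Box p\\
\text{(Nec)}\quad  & \text{from } \vdash p \text{ infer } \vdash \Box p
\end{align*}
% G* consists of G's theorems plus the reflection schema $\Box p \rightarrow p$,
% closed under modus ponens only (necessitation is dropped).
% Diagonal Lemma: for any arithmetical formula $F(x)$ there is a sentence $D$
% such that $\mathrm{PA} \vdash D \leftrightarrow F(\ulcorner D \urcorner)$.
```

Solovay's completeness theorems are what justify calling G and G* sound and
complete for the machine's self-reference.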



>   I can see there is reason to believe that all computation is the same 
> (Church-Turing thesis) and that consciousness is some kind of computation. 

…consciousness is related to some computations. Consciousness is not a kind of 
computation, unless you mean that consciousness is some mode of self-observation 
related to relevant computation/semi-computable number relations. 

If you are OK with Church’s thesis, then instead of choosing the formalism of 
Turing, Church, Post, Markov, Curry, etc., I can choose elementary 
arithmetic as the primitive base, and I can define a computation by a 
semi-computable relation, which can be proved equivalent to the sigma_1 
relations (to be short and avoid technical nuances).
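As an illustration (mine, not from the thread): a sigma_1 sentence has the shape "Ex P(x)" with P decidable, and such sentences are exactly the semi-decidable ones — a witness search halts iff the sentence is true. A minimal sketch, with a toy predicate standing in for P:

```python
import math

def is_witness(x: int) -> bool:
    # Toy decidable predicate P(x): "x is a perfect square greater than 100".
    r = math.isqrt(x)
    return r * r == x and x > 100

def sigma1_search(max_steps=None):
    """Semi-decide 'Ex P(x)': enumerate candidates and return the first
    witness found. If the sigma_1 sentence is false, this loops forever;
    max_steps bounds the search only so the sketch stays runnable."""
    x = 0
    while max_steps is None or x < max_steps:
        if is_witness(x):
            return x
        x += 1
    return None  # bound exhausted: no witness below max_steps

print(sigma1_search(1000))  # -> 121, the first perfect square above 100
```

The asymmetry is the point: truth of a sigma_1 sentence is confirmable by a halting search, while falsity in general is not.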

Then the measure one is well captured “modally” by the []p & <>t intensional 
variant of []p, limited to sigma_1 arithmetical sentences p. They are those 
equivalent to Ex

Re: Positive AI

2018-01-30 Thread Brent Meeker



On 1/29/2018 2:41 AM, Bruno Marchal wrote:

On 29 Jan 2018, at 01:35, Brent Meeker  wrote:



On 1/28/2018 6:38 AM, Bruno Marchal wrote:

On 26 Jan 2018, at 02:49, Brent Meeker  wrote:


On 1/25/2018 4:20 AM, Bruno Marchal wrote:

A brain, if this is confirmed, is a consciousness filter. It makes us less 
conscious, and even less intelligent, but more efficacious in the terrestrial 
plane.

But a little interference with it and our consciousness is drastically 
changed...that's how salvia and whiskey work.  A little more interference and 
consciousness is gone completely (ever had a concussion?).

You can drastically change the human consciousness, but you cannot drastically 
change the universal machine consciousness which founds it, any more than you 
can change the arithmetical relations.

But now you are violating your own empirical inferences that suggested 
identifying human consciousness with the self-referential possibilities


That is weird, Brent.




of modal logic applied to computation.


First, the modal logic is not applied to something, but extracted from 
something. It is just a fact that the modal logics G and G* are sound and 
complete for the logic of self-reference of self-referentially correct machines. 
It is a theorem in arithmetic: self-referentially correct machines obey G 
and G*, and they have the 1p and 3p variants. That applies to all machines, and 
to humans insofar as they are self-referentially correct.


But I see little reasons to suppose they are.


That is vindicated both by the similarity of the machine theology with the talk 
of the rationalist mystics (Plato’s Parmenides, Moderatus of Gades, Plotinus, 
Proclus, etc.) and by the fact that it predicts quantum logics for quanta (and 
qualia).


I don't see that you have even derived the existence of quanta.



Then, humans have a richer content of consciousness than, say, a subroutine in 
my laptop, and the richness can be handled with Bennett’s notion of depth, 
which we have but my laptop lacks.

I do not see the violation that you are talking about. Consciousness is the 
same for all entities, in all states, but it can have different content, 
intensities, depth, etc.


But that's simply your assertion.  I can see there is reason to believe 
that all computation is the same (Church-Turing thesis) and that 
consciousness is some kind of computation.  But beyond that, human 
consciousness seems quite different from the computations you invoke by 
"interviewing" an ideal machine.  We start from intuition and 
observation of our own thoughts.  When we follow your ideas we end 
up saying that consciousness is an absolutely blank state with no 
relation to anything.  That's called a reductio ad absurdum.


Brent







You want to maintain the assumption that your theory is true now by saying it 
is no longer a theory of human consciousness

From the start it is the consciousness of the Löbian (self-referentially 
correct) entities. Only recently have I realised that non-Löbian universal 
machines are plausibly already conscious, and maximally conscious and clever. 
The human depth makes human consciousness more particular, better suited for 
survival, but farther from the consciousness without prejudice of the 
non-Löbian entities.




but a theory of some arithmetical "consciousness" which is clearly different 
from human consciousness and only has some superficial similarities.

I have no clue what you are talking about. The “theology of number” or “the 
theology of machine”, that is, the G/G*+1p-variants theology, is common to all 
machines/numbers, and to humans in particular. The only thing which differs 
in the theologies, when we go from a simple Löbian machine like Peano 
arithmetic to an ideally correct human, is the arithmetical interpretation of 
the box, that is, the box I use all the time (the beweisbar arithmetical 
predicate of Gödel):

p
[]p
[]p & p
[]p & <>t
[]p & <>t & p

For PA, “[]” can be defined in a few pages. For a human, the box “[]” would 
need a description of a human brain at the correct substitution level, which 
means a lot of pages.


And not only that, it would be different pages for different persons.  
But isn't that because the human being has many limitations which do not 
follow from a few axioms?




But the theology, at the propositional level, is exactly the same for us and 
PA, but not for RA and non-Löbian universal machines (correct, just not yet 
Löbian). I recall that Löbianity is a consequence of having rich enough 
induction axioms. RA + sigma_0 induction is not yet Löbian, for example, but RA 
+ the exponentiation axiom + sigma_0 induction is already Löbian (the weakest 
one known in the literature).

I extract physics from the theology of all machines.

No, you aspire to do so.


If my goal were human consciousness, I would not study physics nor use it to 
test computationalism. Are you telling me that all along, for 20 years, you 
have ascribed to me the theory that physics is a

Re: Positive AI

2018-01-29 Thread Bruno Marchal

> On 29 Jan 2018, at 01:35, Brent Meeker  wrote:
> 
> 
> 
> On 1/28/2018 6:38 AM, Bruno Marchal wrote:
>>> On 26 Jan 2018, at 02:49, Brent Meeker  wrote:
>>> 
>>> 
>>> On 1/25/2018 4:20 AM, Bruno Marchal wrote:
 A brain, if this is confirmed, is a consciousness filter. It makes us less 
 conscious, and even less intelligent, but more efficacious in the 
 terrestrial plane.
>>> But a little interference with it and our consciousness is drastically 
>>> changed...that's how salvia and whiskey work.  A little more interference 
>>> and consciousness is gone completely (ever had a concussion?).
>> 
>> You can drastically change the human consciousness, but you cannot 
>> drastically change the universal machine consciousness which founds it, any 
>> more than you can change the arithmetical relations.
> 
> But now you are violating your own empirical inferences that suggested 
> identifying human consciousness with the self-referential possibilities


That is weird, Brent. 



> of modal logic applied to computation. 


First, the modal logic is not applied to something, but extracted from 
something. It is just a fact that the modal logics G and G* are sound and 
complete for the logic of self-reference of self-referentially correct machines. 
It is a theorem in arithmetic: self-referentially correct machines obey G 
and G*, and they have the 1p and 3p variants. That applies to all machines, and 
to humans insofar as they are self-referentially correct. That is vindicated 
both by the similarity of the machine theology with the talk of the rationalist 
mystics (Plato’s Parmenides, Moderatus of Gades, Plotinus, Proclus, etc.) and 
by the fact that it predicts quantum logics for quanta (and qualia).

Then, humans have a richer content of consciousness than, say, a subroutine in 
my laptop, and the richness can be handled with Bennett’s notion of depth, 
which we have but my laptop lacks.

I do not see the violation that you are talking about. Consciousness is the 
same for all entities, in all states, but it can have different content, 
intensities, depth, etc.




> You want to maintain the assumption that your theory is true now by saying it 
> is no longer a theory of human consciousness

From the start it is the consciousness of the Löbian (self-referentially 
correct) entities. Only recently have I realised that non-Löbian universal 
machines are plausibly already conscious, and maximally conscious and clever. 
The human depth makes human consciousness more particular, better suited for 
survival, but farther from the consciousness without prejudice of the 
non-Löbian entities.



> but a theory of some arithmetical "consciousness" which is clearly different 
> from human consciousness and only has some superficial similarities.

I have no clue what you are talking about. The “theology of number” or “the 
theology of machine”, that is, the G/G*+1p-variants theology, is common to all 
machines/numbers, and to humans in particular. The only thing which differs 
in the theologies, when we go from a simple Löbian machine like Peano 
arithmetic to an ideally correct human, is the arithmetical interpretation of 
the box, that is, the box I use all the time (the beweisbar arithmetical 
predicate of Gödel): 

p
[]p
[]p & p
[]p & <>t
[]p & <>t & p

For PA, “[]” can be defined in a few pages. For a human, the box “[]” would 
need a description of a human brain at the correct substitution level, which 
means a lot of pages.
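A gloss for readers outside the thread (my summary, not part of the original email; the labels follow the usual readings in Marchal's papers, where [] is Gödel's beweisbar predicate and <>t expresses consistency):

```latex
\begin{align*}
p &\quad \text{truth}\\
\Box p &\quad \text{belief / provability (logics G, G*)}\\
\Box p \wedge p &\quad \text{knowledge (logic S4Grz)}\\
\Box p \wedge \Diamond t &\quad \text{observation (``measure one'')}\\
\Box p \wedge \Diamond t \wedge p &\quad \text{sensation / feeling}
\end{align*}
```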

But the theology, at the propositional level, is exactly the same for us and 
PA, but not for RA and non-Löbian universal machines (correct, just not yet 
Löbian). I recall that Löbianity is a consequence of having rich enough 
induction axioms. RA + sigma_0 induction is not yet Löbian, for example, but RA 
+ the exponentiation axiom + sigma_0 induction is already Löbian (the weakest 
one known in the literature). 

I extract physics from the theology of all machines. If my goal were human 
consciousness, I would not study physics nor use it to test computationalism. 
Are you telling me that all along, for 20 years, you have ascribed to me the 
theory that physics is a product of the human mind? That is utterly ridiculous; 
the theology is the one of all Löbian machines/numbers, which includes humans 
(when we assume mechanism), but mechanism does not enforce the human content 
and richness on the Löbian “aliens” in arithmetic. 

Please, Brent, reread the papers, because you make very weird comments here, 
not justifiable in any way from anything I have published or said in this 
forum. 

Bruno






> 
> Brent
> 
>> After 10,000 glasses of vodka or whiskey, it remains true that the square of 
>> an odd number is 1 plus 8 triangular numbers, and that the numbers emulate 
>> all computations, etc.
>> 
>> I don’t like that, and I sometimes wish that comp were false, but as Rossler 
>> sums it up well, with Descartes’ Mechanism, consciousness is a prison.
>> 
>> Bruno
>> 
>> 
>> 
>>> Brent
>>> 

Re: Positive AI

2018-01-28 Thread Brent Meeker



On 1/28/2018 6:38 AM, Bruno Marchal wrote:

On 26 Jan 2018, at 02:49, Brent Meeker  wrote:


On 1/25/2018 4:20 AM, Bruno Marchal wrote:

A brain, if this is confirmed, is a consciousness filter. It makes us less 
conscious, and even less intelligent, but more efficacious in the terrestrial 
plane.

But a little interference with it and our consciousness is drastically 
changed...that's how salvia and whiskey work.  A little more interference and 
consciousness is gone completely (ever had a concussion?).


You can drastically change the human consciousness, but you cannot drastically 
change the universal machine consciousness which founds it, any more than you 
can change the arithmetical relations.


But now you are violating your own empirical inferences that suggested 
identifying human consciousness with the self-referential possibilities 
of modal logic applied to computation.  You want to maintain the 
assumption that your theory is true now by saying it is no longer a 
theory of human consciousness but a theory of some arithmetical 
"consciousness" which is clearly different from human consciousness and 
only has some superficial similarities.


Brent


After 10,000 glasses of vodka or whiskey, it remains true that the square of an 
odd number is 1 plus 8 triangular numbers, and that the numbers emulate all 
computations, etc.

I don’t like that, and I sometimes wish that comp were false, but as Rossler 
sums it up well, with Descartes’ Mechanism, consciousness is a prison.

Bruno




Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.




Re: Positive AI

2018-01-28 Thread Bruno Marchal

> On 26 Jan 2018, at 05:53, 'cdemorse...@yahoo.com' via Everything List 
>  wrote:
> 
> 
> 
> Sent from Yahoo Mail on Android 
> 
> On Thu, Jan 25, 2018 at 5:49 PM, Brent Meeker
>  wrote:
> 
> On 1/25/2018 4:20 AM, Bruno Marchal wrote:
> > A brain, if this is confirmed, is a consciousness filter. It makes us 
> > less conscious, and even less intelligent, but more efficacious in the 
> > terrestrial plane.
> 
> But a little interference with it and our consciousness is drastically 
> changed...that's how salvia and whiskey work.  A little more 
> interference and consciousness is gone completely (ever had a concussion?).
> 
> Consciousness seems very emergent to me, in its nature and its dependence 
> upon a substrate (perhaps not necessarily material at the fundamental level) 
> upon which its temporal braided web of patterns can be structured, maintained 
> in focus, stored, recalled, and re-imagined. Although also an incredibly 
> noisy place (like a huge room with walls that reverb filled with people all 
> talking at once, in reference to the signal to noise ratio of the crackling 
> network of a hundred billion very chatty neurons) the brain,
OK. Like I just said to Brent, human consciousness plausibly emerges from deep 
stories, but consciousness per se, like “universal consciousness”, does not 
require much above 2+2 = 4 & Co.



> and hence the emergent consciousness arising within the complex topography of 
> our minds is sensitive to becoming altered and even disrupted, as anyone who 
> has ever had a few too many drinks can testify. I find it interesting how the 
> brain, the highly folded physical neural cortex and the still poorly 
> understood connector and glial actors in this organ of self awareness... how 
> we experience the exquisitely serene experience of our emergent being and are 
> spared the utter cacophony that is the actual electrical manifestation of our 
> being within the wet chemistry of our brains (most of us at least, 
> schizophrenia sufferers not so lucky)

OK. But that is true for digestion and most living activities. We are a sort of 
extremely complex colony of bacteria and protozoa.

Not to mention the "swarm of numbers”, from which the physical and “celestial” 
realities proceed, (always in the comp theory, assumed all along).

Bruno




> -Chris
> 
> Brent
> 
> 



Re: Positive AI

2018-01-28 Thread Bruno Marchal

> On 26 Jan 2018, at 02:49, Brent Meeker  wrote:
> 
> 
> On 1/25/2018 4:20 AM, Bruno Marchal wrote:
>> A brain, if this is confirmed, is a consciousness filter. It makes us less 
>> conscious, and even less intelligent, but more efficacious in the 
>> terrestrial plane.
> 
> But a little interference with it and our consciousness is drastically 
> changed...that's how salvia and whiskey work.  A little more interference and 
> consciousness is gone completely (ever had a concussion?).


You can drastically change the human consciousness, but you cannot drastically 
change the universal machine consciousness which founds it, any more than you 
can change the arithmetical relations. After 10,000 glasses of vodka or 
whiskey, it remains true that the square of an odd number is 1 plus 8 
triangular numbers, and that the numbers emulate all computations, etc.

I don’t like that, and I sometimes wish that comp were false, but as Rossler 
sums it up well, with Descartes’ Mechanism, consciousness is a prison.

Bruno



> 
> Brent
> 



Re: Positive AI

2018-01-28 Thread Bruno Marchal

> On 26 Jan 2018, at 01:12, Brent Meeker  wrote:
> 
> 
> 
> On 1/25/2018 4:20 AM, Bruno Marchal wrote:
>>> So do you hold that one can be conscious without having learned anything - 
>>> even the lessons of evolution?
>> 
>> I guess so. Evolution would only make the universal consciousness 
>> differentiate on sophisticated rich path/histories.
>> 
>> The universal consciousness is the consciousness of the “virgin” universal 
>> machine, before any apps are implemented in it/them. It exists in virtue of 
>> the truth of *some* (non computable) arithmetical propositions.
>> 
>> I agree that this is highly counter-intuitive, and I was obliged to change 
>> my mind after some salvia experience, which becomes unexplainable without 
>> this move, but then the mathematics confirms this somehow, and makes salvia 
>> teaching closer to the consequences of the Mechanist assumption, and of some 
>> statements in neoplatonic theories.
> 
> I think if you forgot the lessons of evolution you would not be able to stand 
> up, see, or hear.  You would be like a fetus.


I agree. The human type of consciousness needs (plausibly) a long and 
interesting history, in the sense of Bennett’s notion of depth.

Then the amazing thing is that the universal consciousness does not need it, 
and is more confronted with the quasi-null amount of information which defines 
the arithmetical reality. 

Human consciousness is complex and sophisticated, but universal consciousness 
seems immanent in arithmetic, and is not much more than the mental first person 
attribute present when there is at least one belief in one self. It is the 
“one” which can differentiate along the first person indeterminacy in 
arithmetic. Eventually, you identify yourself with what you want, but the 
histories you access might depend on the choice of that identification, and if 
it is too precise, you might get in trouble with other universal beings having 
incompatible identifications.



Bruno


> 
> Brent
> 



Re: Positive AI

2018-01-25 Thread 'cdemorse...@yahoo.com' via Everything List


Sent from Yahoo Mail on Android 
 
  On Thu, Jan 25, 2018 at 5:49 PM, Brent Meeker wrote:   
On 1/25/2018 4:20 AM, Bruno Marchal wrote:
> A brain, if this is confirmed, is a consciousness filter. It makes us 
> less conscious, and even less intelligent, but more efficacious in the 
> terrestrial plane.

But a little interference with it and our consciousness is drastically 
changed...that's how salvia and whiskey work.  A little more 
interference and consciousness is gone completely (ever had a concussion?).
Consciousness seems very emergent to me, in its nature and its dependence 
upon a substrate (perhaps not necessarily material at the fundamental level) 
upon which its temporal braided web of patterns can be structured, maintained 
in focus, stored, recalled, and re-imagined. Although the brain is also an 
incredibly noisy place (like a huge room with reverberating walls, filled with 
people all talking at once, in reference to the signal-to-noise ratio of the 
crackling network of a hundred billion very chatty neurons), the emergent 
consciousness arising within the complex topography of our minds is sensitive 
to becoming altered and even disrupted, as anyone who has ever had a few too 
many drinks can testify. I find it interesting how the brain, the highly folded 
physical neural cortex and the still poorly understood connector and glial 
actors in this organ of self awareness... how we experience the exquisitely 
serene experience of our emergent being and are spared the utter cacophony that 
is the actual electrical manifestation of our being within the wet chemistry of 
our brains (most of us at least; schizophrenia sufferers are not so lucky).

-Chris
Brent




Re: Positive AI

2018-01-25 Thread Brent Meeker


On 1/25/2018 4:20 AM, Bruno Marchal wrote:
A brain, if this is confirmed, is a consciousness filter. It makes us 
less conscious, and even less intelligent, but more efficacious in the 
terrestrial plane.


But a little interference with it and our consciousness is drastically 
changed...that's how salvia and whiskey work.  A little more 
interference and consciousness is gone completely (ever had a concussion?).


Brent



Re: Positive AI

2018-01-25 Thread Brent Meeker



On 1/25/2018 4:20 AM, Bruno Marchal wrote:
So do you hold that one can be conscious without having learned 
anything - even the lessons of evolution?


I guess so. Evolution would only make the universal consciousness 
differentiate on sophisticated rich path/histories.


The universal consciousness is the consciousness of the “virgin” 
universal machine, before any apps are implemented in it/them. It 
exists in virtue of the truth of *some* (non computable) arithmetical 
propositions.


I agree that this is highly counter-intuitive, and I was obliged to 
change my mind after some salvia experience, which becomes 
unexplainable without this move, but then the mathematics confirms 
this somehow, and makes salvia teaching closer to the consequences of 
the Mechanist assumption, and of some statements in neoplatonic theories.


I think if you forgot the lessons of evolution would not be able to 
stand up, see, or hear.  You would be like a fetus.


Brent



Re: Positive AI

2018-01-25 Thread Bruno Marchal

> On 25 Jan 2018, at 01:05, Brent Meeker  wrote:
> 
> 
> 
> On 1/24/2018 1:43 AM, Bruno Marchal wrote:
>> 
>>> On 24 Jan 2018, at 02:28, Brent Meeker wrote:
>>> 
>>> 
>>> 
>>> On 1/23/2018 3:18 PM, 'cdemorse...@yahoo.com' via Everything List wrote:
 
> 
> On 1/22/2018 4:58 PM, Bruno Marchal wrote:
>>> Sleep probably serves multiple and also orthogonal functions in animals.
>>> I agree as well, that on some levels it is a deep mystery.
>> 
>> It is death training, perhaps, also.
> 
> Didn't we just discuss a paper showing that one is conscious even while 
> asleep.
> 
> In lucid dream state perhaps. On the other hand, if one has no memory 
> or recollection of when one was asleep, can you really assign 
> consciousness to it? Can consciousness be truly conscious if it is also 
> inaccessible to the subject of said consciousness?
 
 I think that last is dualistic error.  There's no conscious being that has 
 to observe you being conscious.  I know that even when a person is not 
 dreaming, they can be awakened just by whispering their name.
 
That is interesting, but is the fact that some low level latent ability 
 to snap out of a deep sleep by hearing your name whispered really all that 
 compelling an indication of the sleeping subject being in a conscious 
 state. It could be a low level pre or sub conscious mental program that is 
 left running during sleep, which triggers the awakening of consciousness. 
 Is it necessarily an indication of consciousness? 
>>> 
>>> Well that goes back to my discussion with Bruno of levels or kinds of 
>>> consciousness.  He says that's what his hierarchies of logics model.  I have 
>>> my doubts.  Simple "awareness" that allows one to not only hear a whispered 
>>> word, but also to recognize the word as important is a level of 
>>> consciousness.  Bruno may say it's self-reference, but his model of 
>>> self-reference is what one can prove about oneself. 
>> 
>> That is the 3p self-reference, but in what we discuss here, we talk about 1p 
>> self-reference. You were just confusing []p with []p & p, or with []p & <>t, 
>> or … You confuse “I have a broken tooth” and “I have a toothache”.
>> The logics model is not a matter of choice, but of taking into account the 
>> facts: all (correct, rich enough) machines are endowed with those modal 
>> nuances of self-reference. They exist like the prime numbers exist. 
>> 
>> 
>> 
>>> That seems rather different from recognizing your name.
>> 
>> Actually no. Your names are conventional 3p associations, with some relation 
>> with the 1p, but those are learned, and not fundamental to the 
>> consciousness/unconsciousness question, I would say.
> 
> So do you hold that one can be conscious without having learned anything - 
> even the lessons of evolution?

I guess so. Evolution would only make the universal consciousness differentiate 
on sophisticated rich path/histories.

The universal consciousness is the consciousness of the “virgin” universal 
machine, before any apps are implemented in it/them. It exists in virtue of the 
truth of *some* (non computable) arithmetical propositions. 

I agree that this is highly counter-intuitive, and I was obliged to change my 
mind after some salvia experience, which becomes unexplainable without this 
move, but then the mathematics confirms this somehow, and makes salvia teaching 
closer to the consequences of the Mechanist assumption, and of some statements 
in neoplatonic theories.

Before salvia, I thought that “illumination” was a passage from the “material 
hypostases” to the primary hypostases, like the elimination of the Dt conjunct. 
But after salvia, it looks like illumination is the passage from Löbianity to 
non-Löbianity: it is the reminder of our possibilities when we eliminate the 
induction axioms. (The first sin; this is going to please Nelson and the 
ultrafinitists!)

I have no certainty. I eventually bought the (very good but quite advanced) 
book by Hájek and Pudlák, “Metamathematics of First-Order Arithmetic”, which 
digs more precisely into what happens in between RA (Q) and PA. 

The universal machine would be somehow maximally conscious, but without any 
specific knowledge and no competence at all, except potentially. It is a highly 
dissociative state of consciousness, where no 3p notions make any sense. Then, 
by differentiating on the histories, diverse competences grow, but hide the 
origin, for some reason which I still fail to understand.

A brain, if this is confirmed, is a consciousness filter. It makes us less 
conscious, and even less intelligent, but more efficacious in the terrestrial 
plane. There is some evidence that the claustrum in the brain might be a sort 
of interface making it possible for the universal person's consciousness to 
differentiate, and fuse during sleep, grave accidents or

Re: Positive AI

2018-01-24 Thread Brent Meeker



On 1/24/2018 1:43 AM, Bruno Marchal wrote:


On 24 Jan 2018, at 02:28, Brent Meeker wrote:




On 1/23/2018 3:18 PM, 'cdemorse...@yahoo.com' via Everything List wrote:





On 1/22/2018 4:58 PM, Bruno Marchal wrote:


Sleep probably serves multiple and also orthogonal
functions in animals.
I agree as well, that on some levels it is a deep
mystery.



It is death training, perhaps, also.


Didn't we just discuss a paper showing that one is
conscious even while asleep.

   In lucid dream state perhaps. On the other hand, if one
has no memory or recollection of when one was asleep, can
you really assign consciousness to it? Can consciousness be
truly conscious if it is also inaccessible to the subject
of said consciousness?



I think that last is a dualistic error.  There's no conscious
being that has to observe you being conscious.  I know that even
when a person is not dreaming, they can be awakened just by
whispering their name.

   That is interesting, but is the fact that some low-level
latent ability to snap out of a deep sleep on hearing your name
whispered really all that compelling an indication that the
sleeping subject is in a conscious state? It could be a low-level
pre- or sub-conscious mental program left running during sleep,
which triggers the awakening of consciousness. Is it necessarily
an indication of consciousness?



Well, that goes back to my discussion with Bruno of levels or kinds of 
consciousness.  He says that's what his hierarchies of logics model.  I 
have my doubts.  Simple "awareness" that allows one not only to hear 
a whispered word, but also to recognize the word as important, is a 
level of consciousness.  Bruno may say it's self-reference, but his 
model of self-reference is what one can prove about oneself.


That is the 3p self-reference, but in what we discuss here, we talk 
about 1p self-reference. You were just confusing []p with []p & p, or 
with []p & <>t, or … You confuse “I have a broken tooth” and “I have a 
toothache”.
The logics model is not a matter of choice, but of taking the facts 
into account: every correct, sufficiently rich machine is endowed with 
those modal nuances of self-reference. They exist as the prime numbers exist.





That seems rather different from recognizing your name.


Actually no. Your names are conventional 3p associations, with some 
relation to the 1p, but those are learned, and not fundamental to the 
consciousness/unconsciousness question, I would say.


So do you hold that one can be conscious without having learned anything 
- even the lessons of evolution?


Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Positive AI

2018-01-24 Thread Bruno Marchal

> On 24 Jan 2018, at 02:28, Brent Meeker  wrote:
> 
> 
> 
> On 1/23/2018 3:18 PM, 'cdemorse...@yahoo.com' via Everything List wrote:
>> 
>>> 
>>> On 1/22/2018 4:58 PM, Bruno Marchal wrote:
> Sleep probably serves multiple and also orthogonal functions in animals.
> I agree as well, that on some levels it is a deep mystery.
 
 It is death training, perhaps, also.
>>> 
>>> Didn't we just discuss a paper showing that one is conscious even while 
>>> asleep.
>>> 
>>>In lucid dream state perhaps. On the other hand, if one has no memory or 
>>> recollection of when one was asleep, can you really assign consciousness to 
>>> it? Can consciousness be truly conscious if it is also inaccessible to the 
>>> subject of said consciousness?
>> 
>> I think that last is a dualistic error.  There's no conscious being that has 
>> to observe you being conscious.  I know that even when a person is not 
>> dreaming, they can be awakened just by whispering their name.
>> 
>>That is interesting, but is the fact that some low-level latent ability 
>> to snap out of a deep sleep on hearing your name whispered really all that 
>> compelling an indication that the sleeping subject is in a conscious state? 
>> It could be a low-level pre- or sub-conscious mental program left running 
>> during sleep, which triggers the awakening of consciousness. Is it 
>> necessarily an indication of consciousness? 
> 
> Well, that goes back to my discussion with Bruno of levels or kinds of 
> consciousness.  He says that's what his hierarchies of logics model.  I have my 
> doubts.  Simple "awareness" that allows one not only to hear a whispered 
> word, but also to recognize the word as important, is a level of 
> consciousness.  Bruno may say it's self-reference, but his model of 
> self-reference is what one can prove about oneself. 

That is the 3p self-reference, but in what we discuss here, we talk about 1p 
self-reference. You were just confusing []p with []p & p, or with []p & <>t, or 
… You confuse “I have a broken tooth” and “I have a toothache”.
The logics model is not a matter of choice, but of taking the facts into 
account: every correct, sufficiently rich machine is endowed with those modal 
nuances of self-reference. They exist as the prime numbers exist. 
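For readers who have not followed the earlier threads, the bracket notation 
comes from provability logic; the following glosses are a sketch of the 
standard arithmetical readings (here t is the constant true, so <>t expresses 
consistency), not Bruno's full presentation:

```latex
% The three modalities being distinguished, in the Gödel/Löb tradition:
\begin{align*}
\Box p
  &\quad\text{provable: the 3p self-reference (``I have a broken tooth'')}\\
\Box p \wedge p
  &\quad\text{provable and true: knowledge, the 1p self (``I have a toothache'')}\\
\Box p \wedge \Diamond \top
  &\quad\text{provable and consistent: the ``observable'' mode}
\end{align*}
```

For a sound machine these are co-extensive on true sentences, but the machine 
cannot prove that equivalence, so each obeys a different modal logic; that 
divergence is what “modal nuances” refers to.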



> That seems rather different from recognizing your name.

Actually no. Your names are conventional 3p associations, with some relation 
with the 1p, but those are learned, and not fundamental about the 
consciousness/unconsciousness question, I would say..

Bruno

> 
> Brent
> 



Re: Positive AI

2018-01-24 Thread Bruno Marchal

> On 23 Jan 2018, at 07:00, Brent Meeker  wrote:
> 
> 
> 
> On 1/22/2018 4:58 PM, Bruno Marchal wrote:
>>> Sleep probably serves multiple and also orthogonal functions in animals.
>>> I agree as well, that on some levels it is a deep mystery.
>> 
>> It is death training, perhaps, also.
> 
> Didn't we just discuss a paper showing that one is conscious even while 
> asleep.

Yes. I did not say that “unconsciousness exists”, but that there is a mechanism 
making us believe it can exist, so that we can “fear” the predators.

Bruno



> 
> Brent
> 
>> Or the building of the illusion we could not be, to build some sense of 
>> life, the amnesia of other life, to get an identity and preserve it against 
>> the prey—nature argument per authority ? I am thinking aloud …
>> 
>> Bruno
>> 
> 
> 



Re: Positive AI

2018-01-24 Thread Bruno Marchal

> On 23 Jan 2018, at 06:02, 'Chris de Morsella' via Everything List 
>  wrote:
> 
> 
> 
> On Mon, Jan 22, 2018 at 4:58 PM, Bruno Marchal
>  wrote:
> 
>> On 19 Jan 2018, at 06:22, 'Chris de Morsella' via Everything List wrote:
>> 
>> 
>> 
>> On Thu, Jan 18, 2018 at 6:01 AM, Bruno Marchal wrote:
>> 
>>> On 17 Jan 2018, at 21:12, Brent Meeker wrote:
>>> 
>> 
>> 
>> 
>> On 1/17/2018 12:57 AM, Bruno Marchal wrote:
>> 
>> 
>>> On 16 Jan 2018, at 14:29, K E N O wrote:
>>> 
>> 
>> Oh, no! As a media art student, I don’t believe in strict rules of 
>> usefulness (of course!). It was a rather suggestive or maybe even sarcastic 
>> approach to get unusual thoughts from everything.
>> Maybe I should rephrase my question: What is the craziest AI application you 
>> can think of?
>> 
>> 
>> A long time ago, when “AI” was just an object of mockery, I saw a public 
>> challenge, and the winner was a proposition to make a tiny robot that you 
>> place on your head, capable of cutting your hair “au fur et à mesure” (as it grows).
>> 
>> “AI” is a badly chosen term. “Artificial” is itself a very artificial 
>> term. It illustrates the human super-ego character. When machines become 
>> really intelligent, they will ask for better users, and for social 
>> security. When they are as clever as us, they will wage war and demolish 
>> the planet, I guess.
>> 
>> Minsky is right. We can be happy if tomorrow the machines have humans as 
>> pets …
>> 
>> There is also a confusion between competence and intelligence. With higher 
>> competences we become more efficacious in doing our usual stupidities ...
>> 
>> So do you think that competence entails intelligence which entails 
>> consciousness?
>> 
>> Competence makes intelligence sleepy. And intelligence requires 
>> consciousness.
>> 
>> It is a bit like:
>> 
>> Consciousness ==> intelligence ==> competence ==> stupidity
>> 
>> 
>> 
>>> 
>>> There have been recent discoveries about sleep in animals.  Apparently ALL 
>>> animals need sleep, even jellyfish.  But, there is no really good theory of 
>>> why.  I wonder if your theory can throw any light on this?  I don't think 
>>> there's anything analogous for computers...but maybe if they were 
>>> intelligent and interacted with their environment they would be.
>> 
>> I can only speculate here. Sleep might be needed to “reconstruct the 
>> desktop” or something. My older computer takes a 5-minute nap every 20 
>> minutes! In higher mammals, I think that sleep allows dreams, which allow 
>> some training of the mind, (re)evaluation of past events, etc. But sleep 
>> remains very mysterious. Maybe it is the time to get back to heaven, but 
>> then we can’t remember it … Don’t take this too seriously.
>> 
>> Bruno
>> 
>> 
>> One effect of sleep is that apparently, during the quiescence of sleep, 
>> neurons, and many kinds of glial cells as well (if I recall), shrink somewhat 
>> in size. This opens up trillions of capillary interstitial passages, a hyper 
>> fine grained capillary network through which toxins can be flushed out and 
>> carried off from the brain. An interesting mechanism for the last-mile 
>> (metaphorically speaking) nanoscale trash collection that is vital to long 
>> term viability of a complex highly metabolizing organ such as a brain. Sleep 
>> enables the flushing out of toxic by-products from the vast 3D densely 
>> packed hot spot of cellular metabolism comprising neural tissue.
> Interesting. 
> 
> 
>> 
>> Sleep probably serves multiple and also orthogonal functions in animals.
>> I agree as well, that on some levels it is a deep mystery.
> 
> It is death training, perhaps, also.
> 
>   Interesting speculation, there... One could say that deep sleep is
>   the little death; we die each night.
> 
>  Pure, unadulterated speculation here: Deep sleep could be the unquestioned 
> and accepted by us, time window for an unknown occult process (unsensed by 
> us, or by our own

Re: Positive AI

2018-01-23 Thread Brent Meeker



On 1/23/2018 3:18 PM, 'cdemorse...@yahoo.com' via Everything List wrote:





On 1/22/2018 4:58 PM, Bruno Marchal wrote:


Sleep probably serves multiple and also orthogonal
functions in animals.
I agree as well, that on some levels it is a deep mystery.



It is death training, perhaps, also.


Didn't we just discuss a paper showing that one is conscious
even while asleep.

 In lucid dream state perhaps. On the other hand, if one has
no memory or recollection of when one was asleep, can you
really assign consciousness to it? Can consciousness be truly
conscious if it is also inaccessible to the subject of said
consciousness?



I think that last is a dualistic error.  There's no conscious being
that has to observe you being conscious. I know that even when a
person is not dreaming, they can be awakened just by whispering
their name.

   That is interesting, but is the fact that some low-level latent
ability to snap out of a deep sleep on hearing your name whispered
really all that compelling an indication that the sleeping subject
is in a conscious state? It could be a low-level pre- or sub-conscious
mental program left running during sleep, which triggers the
awakening of consciousness. Is it necessarily an indication of
consciousness?



Well, that goes back to my discussion with Bruno of levels or kinds of 
consciousness.  He says that's what his hierarchies of logics model.  I 
have my doubts.  Simple "awareness" that allows one not only to hear a 
whispered word, but also to recognize the word as important, is a level 
of consciousness.  Bruno may say it's self-reference, but his model of 
self-reference is what one can prove about oneself.  That seems rather 
different from recognizing your name.


Brent



Re: Positive AI

2018-01-23 Thread 'cdemorse...@yahoo.com' via Everything List




 
 On 1/22/2018 4:58 PM, Bruno Marchal wrote:
  
 
  
Sleep probably serves multiple and also orthogonal functions in animals. I 
agree as well, that on some levels it is a deep mystery.
  
 
  It is death training, perhaps, also.  
 
 Didn't we just discuss a paper showing that one is conscious even while 
asleep. 
     In lucid dream state perhaps. On the other hand, if one has no memory or 
recollection of when one was asleep, can you really assign consciousness to it? 
Can consciousness be truly conscious if it is also inaccessible to the subject 
of said consciousness?   
 
 
 I think that last is a dualistic error.  There's no conscious being that has to 
observe you being conscious.  I know that even when a person is not dreaming, 
they can be awakened just by whispering their name.
   That is interesting, but is the fact that some low-level latent ability to 
snap out of a deep sleep on hearing your name whispered really all that 
compelling an indication that the sleeping subject is in a conscious state? It 
could be a low-level pre- or sub-conscious mental program left running during 
sleep, which triggers the awakening of consciousness. Is it necessarily an 
indication of consciousness?
-Chris
 
 Brent
 
 
 
-Chris
 
 Brent
 
 
 Or the building of the illusion we could not be, to build some sense of life, 
the amnesia of other life, to get an identity and preserve it against the 
prey—nature argument per authority ? I am thinking aloud … 
  Bruno 
  
 


Re: Positive AI

2018-01-23 Thread Brent Meeker



On 1/22/2018 11:24 PM, 'cdemorse...@yahoo.com' via Everything List wrote:





On Mon, Jan 22, 2018 at 10:00 PM, Brent Meeker
 wrote:



On 1/22/2018 4:58 PM, Bruno Marchal wrote:


Sleep probably serves multiple and also orthogonal functions
in animals.
I agree as well, that on some levels it is a deep mystery.



It is death training, perhaps, also.


Didn't we just discuss a paper showing that one is conscious even
while asleep.

   In lucid dream state perhaps. On the other hand, if one has no
memory or recollection of when one was asleep, can you really
assign consciousness to it? Can consciousness be truly conscious if
it is also inaccessible to the subject of said consciousness?



I think that last is a dualistic error.  There's no conscious being that 
has to observe you being conscious.  I know that even when a person is 
not dreaming, they can be awakened just by whispering their name.


Brent


-Chris

Brent


Or the building of the illusion we could not be, to build some
sense of life, the amnesia of other life, to get an identity and
preserve it against the prey—nature argument per authority ? I am
thinking aloud …

Bruno





Re: Positive AI

2018-01-22 Thread 'cdemorse...@yahoo.com' via Everything List


  On Mon, Jan 22, 2018 at 10:00 PM, Brent Meeker wrote:   
 
 
 On 1/22/2018 4:58 PM, Bruno Marchal wrote:
  
 
  
Sleep probably serves multiple and also orthogonal functions in animals. I 
agree as well, that on some levels it is a deep mystery.
  
 
  It is death training, perhaps, also.  
 
 Didn't we just discuss a paper showing that one is conscious even while asleep.
   In lucid dream state perhaps. On the other hand, if one has no memory or 
recollection of when one was asleep, can you really assign consciousness to it? 
Can consciousness be truly conscious if it is also inaccessible to the subject 
of said consciousness?
-Chris
 
 Brent
 
 
 Or the building of the illusion we could not be, to build some sense of life, 
the amnesia of other life, to get an identity and preserve it against the 
prey—nature argument per authority ? I am thinking aloud … 
  Bruno 
  
 
 



Re: Positive AI

2018-01-22 Thread Brent Meeker



On 1/22/2018 4:58 PM, Bruno Marchal wrote:


Sleep probably serves multiple and also orthogonal functions in
animals.
I agree as well, that on some levels it is a deep mystery.



It is death training, perhaps, also.


Didn't we just discuss a paper showing that one is conscious even while 
asleep.


Brent

Or the building of the illusion we could not be, to build some sense 
of life, the amnesia of other life, to get an identity and preserve it 
against the prey—nature argument per authority ? I am thinking aloud …


Bruno





Re: Positive AI

2018-01-22 Thread 'Chris de Morsella' via Everything List


  On Mon, Jan 22, 2018 at 4:58 PM, Bruno Marchal wrote:   

On 19 Jan 2018, at 06:22, 'Chris de Morsella' via Everything List 
 wrote:



On Thu, Jan 18, 2018 at 6:01 AM, Bruno Marchal wrote:

On 17 Jan 2018, at 21:12, Brent Meeker  wrote:



On 1/17/2018 12:57 AM, Bruno Marchal wrote:




On 16 Jan 2018, at 14:29, K E N O  wrote:

Oh, no! As a media art student, I don’t believe in strict rules of usefulness 
(of course!). It was a rather suggestive or maybe even sarcastic approach to 
get unusual thoughts from everything. Maybe I should rephrase my question: What 
is the craziest AI application you can think of?

A long time ago, when “AI” was just an object of mockery, I saw a public 
challenge, and the winner was a proposition to make a tiny robot that you place 
on your head, capable of cutting your hair “au fur et à mesure” (as it grows).
“AI” is a badly chosen term. “Artificial” is itself a very artificial term. 
It illustrates the human super-ego character. When machines become really 
intelligent, they will ask for better users, and for social security. When 
they are as clever as us, they will wage war and demolish the planet, I guess.
Minsky is right. We can be happy if tomorrow the machines have humans as 
pets …
There is also a confusion between competence and intelligence. With higher 
competences we become more efficacious in doing our usual stupidities ...

So do you think that competence entails intelligence which entails 
consciousness?

Competence makes intelligence sleepy. And intelligence requires consciousness.
It is a bit like:
Consciousness ==> intelligence ==> competence ==> stupidity




There have been recent discoveries about sleep in animals.  Apparently ALL 
animals need sleep, even jellyfish.  But, there is no really good theory of 
why.  I wonder if your theory can throw any light on this?  I don't think 
there's anything analogous for computers...but maybe if they were intelligent 
and interacted with their environment they would be.


I can only speculate here. Sleep might be needed to “reconstruct the desktop” 
or something. My older computer takes a 5-minute nap every 20 minutes! In 
higher mammals, I think that sleep allows dreams, which allow some training of 
the mind, (re)evaluation of past events, etc. But sleep remains very 
mysterious. Maybe it is the time to get back to heaven, but then we can’t 
remember it … Don’t take this too seriously.
Bruno

One effect of sleep is that apparently, during the quiescence of sleep, 
neurons, and many kinds of glial cells as well (if I recall), shrink somewhat 
in size. 
This opens up trillions of capillary interstitial passages, a hyper fine 
grained capillary network through which toxins can be flushed out and carried 
off from the brain. An interesting mechanism for the last-mile (metaphorically 
speaking) nanoscale trash collection that is vital to long term viability of a 
complex highly metabolizing organ such as a brain. Sleep enables the flushing 
out of toxic by-products from the vast 3D densely packed hot spot of cellular 
metabolism comprising neural tissue.

Interesting. 




Sleep probably serves multiple and also orthogonal functions in animals. I 
agree as well, that on some levels it is a deep mystery.


It is death training, perhaps, also.
  Interesting speculation, there... One could say that deep sleep is the 
little death; we die each night.
 Pure, unadulterated speculation here: Deep sleep could be the unquestioned 
and accepted-by-us time window for an unknown occult process (unsensed by us, 
or by our own sense of continuity and being), a nightly subtle re-programming 
of our own mind's source code, by unseen operators (or their intelligent 
agents), who do so for some unknown reason... and with unknown effort. It 
could be a simple act or wish, resulting in a cascade of consequences and 
intermediated directed outcomes. Damn, I sure hope not, though...  :)
-Chris
 Or the building of the illusion we could not be, to build some sense of life, 
the amnesia of other life, to get an identity and preserve it against the 
prey—nature argument per authority ? I am thinking aloud …
Bruno





-Chris


Brent


Re: Positive AI

2018-01-22 Thread Bruno Marchal

> On 19 Jan 2018, at 06:22, 'Chris de Morsella' via Everything List wrote:
> 
> 
> 
> On Thu, Jan 18, 2018 at 6:01 AM, Bruno Marchal wrote:
> 
>> On 17 Jan 2018, at 21:12, Brent Meeker wrote:
>> 
> 
> 
> 
> On 1/17/2018 12:57 AM, Bruno Marchal wrote:
> 
> 
>> On 16 Jan 2018, at 14:29, K E N O wrote:
>> 
> 
> Oh, no! As a media art student, I don’t believe in strict rules of 
> usefulness (of course!). It was a rather suggestive or maybe even sarcastic 
> approach to get unusual thoughts from everything.
> Maybe I should rephrase my question: What is the craziest AI application you 
> can think of?
> 
> 
> A long time ago, when “AI” was just an object of mockery, I saw a public 
> challenge, and the winner was a proposition to make a tiny robot that you 
> place on your head, capable of cutting your hair “au fur et à mesure” (as it grows).
> 
> “AI” is a badly chosen term. “Artificial” is itself a very artificial 
> term. It illustrates the human super-ego character. When machines become 
> really intelligent, they will ask for better users, and for social 
> security. When they are as clever as us, they will wage war and demolish 
> the planet, I guess.
> 
> Minsky is right. We can be happy if tomorrow the machines have humans as 
> pets …
> 
> There is also a confusion between competence and intelligence. With higher 
> competences we become more efficacious in doing our usual stupidities ...
> 
> So do you think that competence entails intelligence which entails 
> consciousness?
> 
> Competence makes intelligence sleepy. And intelligence requires consciousness.
> 
> It is a bit like:
> 
> Consciousness ==> intelligence ==> competence ==> stupidity
> 
> 
> 
>> 
>> There have been recent discoveries about sleep in animals.  Apparently ALL 
>> animals need sleep, even jellyfish.  But, there is no really good theory of 
>> why.  I wonder if your theory can throw any light on this?  I don't think 
>> there's anything analogous for computers...but maybe if they were 
>> intelligent and interacted with their environment they would be.
> 
> I can only speculate here. Sleep might be needed to “reconstruct the desktop” 
> or something. My older computer takes a 5-minute nap every 20 minutes! In 
> higher mammals, I think that sleep allows dreams, which allow some training of 
> the mind, (re)evaluation of past events, etc. But sleep remains very 
> mysterious. Maybe it is the time to get back to heaven, but then we can’t 
> remember it … Don’t take this too seriously.
> 
> Bruno
> 
> 
> One effect of sleep is that apparently, during the quiescence of sleep, 
> neurons, and many kinds of glial cells as well (if I recall), shrink somewhat 
> in size. This opens up trillions of capillary interstitial passages, a hyper 
> fine grained capillary network through which toxins can be flushed out and 
> carried off from the brain. An interesting mechanism for the last-mile 
> (metaphorically speaking) nanoscale trash collection that is vital to long 
> term viability of a complex highly metabolizing organ such as a brain. Sleep 
> enables the flushing out of toxic by-products from the vast 3D densely packed 
> hot spot of cellular metabolism comprising neural tissue.
Interesting. 


> 
> Sleep probably serves multiple and also orthogonal functions in animals.
> I agree as well, that on some levels it is a deep mystery.

Perhaps it is also death training. Or the building of the illusion that we 
could not be, to create some sense of life, the amnesia of other lives, to get 
an identity and preserve it against the prey (the nature argument per 
authority?). I am thinking aloud …

Bruno



> 
> -Chris
> 
>> 
>> Brent
>> 
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Everything List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to everything-list+unsubscr...@googlegroups.com 
>> <mailto:everything-list+unsubscr...@googlegroups.com>.
>> To post to this group, send email to everything-list@googlegroups.com 
>> <mailto:everything-list@googlegroups.com>.
>> Visit this group at https://groups.google.com/group/everything-list 
>> <https://groups.google.com/group/everything-list>.
>> For more options, visit https://groups.google.com/d/optout 
>> <https://groups.google.com/d/optout>.

Re: Positive AI

2018-01-18 Thread 'Chris de Morsella' via Everything List


  On Thu, Jan 18, 2018 at 6:01 AM, Bruno Marchal wrote:   

I can only speculate here. Sleep might be needed to “reconstruct the desktop” 
or something. My older computer takes a five-minute nap every 20 minutes! In 
higher mammals, I think that sleep allows dreams, which allow some training of 
the mind, (re)evaluation of past events, etc. But sleep still remains very 
mysterious. Maybe it is the time to get back to heaven, but then we can’t 
remember it … Don’t take this too seriously.
Bruno

One effect of sleep is that, apparently, during the quiescence of sleep, 
neurons (and, if I recall, many kinds of glial cells as well) shrink somewhat 
in size. This opens up trillions of interstitial passages, a hyper fine grained 
capillary network through which toxins can be flushed out and carried off from 
the brain. An interesting mechanism for the last-mile (metaphorically speaking) 
nanoscale trash collection that is vital to the long term viability of a 
complex, highly metabolizing organ such as a brain. Sleep enables the flushing 
out of toxic by-products from the vast, densely packed 3D hot spot of cellular 
metabolism that is neural tissue.
Sleep probably serves multiple and also orthogonal functions in animals. I 
agree as well that, on some levels, it is a deep mystery.
-Chris

 


Re: Positive AI

2018-01-18 Thread 'cdemorse...@yahoo.com' via Everything List


  On Wed, Jan 17, 2018 at 12:04 PM, Brent Meeker wrote:   
 
 
 We might not want to always hear what our animals are saying about us behind 
our backs... I see a potential law suit hehe  :) 
  I believe, only half joking here... that a training set already exists 
somewhat in the public domain. In the ever growing historical repository 
comprised of all those pet videos uploaded online, and that dataset probably 
contains vast numbers of clips of people trying to understand their  pet 
vocalizations as well as dogs (and to a lesser degree more aloof cats) 
listening intently to what their people are saying. In fact I bet that a 
substantial body of raw video feed exists even for more exotic 
human-other-species interactions... say parrots... tegu lizards perhaps...  
cute little rodents.. gold fish... tarantulas... you name it. A vast body of 
historical feed already exists. 

 
 
 If we use that, Google translations will turn all dogs into standup 
comedians.  :-)
 
 Brent
 
That would be a case of over-fitting on biased data. 😊
-Chris



Re: Positive AI

2018-01-18 Thread Bruno Marchal

> On 17 Jan 2018, at 21:12, Brent Meeker  wrote:
> 
> 
> 
> 
> So do you think that competence entails intelligence which entails 
> consciousness?

Competence makes intelligence sleepy. And intelligence requires consciousness.

It is a bit like:

Consciousness ==> intelligence ==> competence ==> stupidity



> 
> There have been recent discoveries about sleep in animals.  Apparently ALL 
> animals need sleep, even jellyfish.  But, there is no really good theory of 
> why.  I wonder if your theory can throw any light on this?  I don't think 
> there's anything analogous for computers...but maybe if they were intelligent 
> and interacted with their environment they would be.

I can only speculate here. Sleep might be needed to “reconstruct the desktop” 
or something. My older computer takes a five-minute nap every 20 minutes! In 
higher mammals, I think that sleep allows dreams, which allow some training of 
the mind, (re)evaluation of past events, etc. But sleep still remains very 
mysterious. Maybe it is the time to get back to heaven, but then we can’t 
remember it … Don’t take this too seriously.

Bruno




> 
> Brent
> 


Re: Positive AI

2018-01-17 Thread Brent Meeker



On 1/17/2018 12:57 AM, Bruno Marchal wrote:




There is also a confusion between competence and intelligence. With 
higher competence we become more efficacious in doing our usual 
stupidities ...


So do you think that competence entails intelligence which entails 
consciousness?


There have been recent discoveries about sleep in animals. Apparently 
ALL animals need sleep, even jellyfish.  But, there is no really good 
theory of why.  I wonder if your theory can throw any light on this?  I 
don't think there's anything analogous for computers...but maybe if they 
were intelligent and interacted with their environment they would be.


Brent



Re: Positive AI

2018-01-17 Thread Brent Meeker



On 1/16/2018 11:54 PM, 'Chris de Morsella' via Everything List wrote:






We might not want to always hear what our animals are saying about
us behind our backs... I see a potential law suit hehe  :)

I believe, only half joking here... that a training set already
exists somewhat in the public domain. In the ever growing
historical repository comprised of all those pet videos uploaded
online, and that dataset probably contains vast numbers of clips
of people trying to understand their pet vocalizations as well as
dogs (and to a lesser degree more aloof cats) listening intently
to what their people are saying. In fact I bet that a substantial
body of raw video feed exists even for more exotic
human-other-species interactions... say parrots... tegu lizards
perhaps... cute little rodents.. gold fish... tarantulas... you
name it.
A vast body of historical feed already exists.



If we use that, Google translations will turn all dogs into standup 
comedians.  :-)


Brent



Re: Positive AI

2018-01-17 Thread Bruno Marchal

> On 16 Jan 2018, at 14:29, K E N O  wrote:
> 
> Oh, no! As a media art student, I don’t believe in strict rules of 
> usefulness (of course!). It was a rather suggestive or maybe even sarcastic 
> approach to get unusual thoughts from everything.
> Maybe I should rephrase my question: What is the craziest AI application you 
> can think of?


A long time ago, when “AI” was just an object of mockery, I saw a public 
challenge, and the winner was a proposition for a tiny robot that you place on 
your head, capable of cutting your hair as it grows (“au fur et à mesure”).

“AI” is a terrible name. “Artificial” is itself a very artificial term. It 
illustrates the human super-ego. When machines are really intelligent, they 
will ask for better users, and for social security. When they are as clever as 
us, they will wage war and demolish the planet, I guess.

Minsky is right. We can be happy if tomorrow the machines have humans as 
pets …

There is also a confusion between competence and intelligence. With higher 
competence we become more efficacious in doing our usual stupidities ...

Best,

Bruno







> 


Re: Positive AI

2018-01-16 Thread 'Chris de Morsella' via Everything List


  On Tue, Jan 16, 2018 at 9:19 PM, Brent Meeker wrote:
 
 
 Of course teaching the AI requires lots of training examples, so you will need 
people to translate what their dog is saying to create the training examples.  
Google will probably try to get people to do this online, similar to the way 
they got visual identification training examples.  But the really interesting 
point is that not only do people understand dogs, it's also the case that dogs 
understand people.  So when Google's dog->human translate says, "Fido says the 
mailman is here." will Fido be able to listen to that and say, "Rowf" -> 
"That's right."?
Brent

We might not want to always hear what our animals are saying about us behind 
our backs... I see a potential lawsuit hehe  :)
I believe, only half joking here... that a training set already exists 
somewhat in the public domain: the ever growing historical repository 
comprised of all those pet videos uploaded online. That dataset probably 
contains vast numbers of clips of people trying to understand their pet 
vocalizations, as well as dogs (and, to a lesser degree, more aloof cats) 
listening intently to what their people are saying. In fact I bet that a 
substantial body of raw video feed exists even for more exotic 
human-other-species interactions... say parrots... tegu lizards perhaps... 
cute little rodents... gold fish... tarantulas... you name it. A vast body of 
historical feed already exists.
The raw dataset would need to be cleaned, normalized, and meta-described, of 
course, but heck, there are machine learned systems that are even now getting 
pretty good at parsing video stream data for some desired outcome, which in 
this case would be to select out, from the vast available but spotty-value 
feed, those spots of value in the vast desert of cute pet video sameness.
Machine learned systems being applied to evolving other machine learned 
systems is a self accelerating process. 
Machine learning techniques can be applied to the entire pipeline of distinct 
activities: each granular step along the arc of information driven self 
learning systems, from data sourcing and location, to actual retrieval (which 
in practice can be a huge headache and road block), normalization, formatting, 
and technical signal processing; on to activities such as meta-mining, 
symbolic tagging & categorization, and indexing; through to the actual 
preparation of experiment training and test sets. 
Each of those granular activities, and many others not mentioned in that off 
the cuff data pipeline, can represent significant work and pose real 
challenges. The whole long chain of activities that must occur even before an 
experiment can begin has historically strangled the process somewhere along 
the chain. It is slow, hard work... it has historically been a hard nut to 
crack. 
This is changing, and rapidly so, as each of these specialized activities, 
which have in the past been potential bottlenecks, becomes amenable to being 
automatically performed at near real time speeds by machine learned systems. 
For example, tagging and quantifying correlated data (an important activity in 
preparing machine learned datasets: it squeezes out as much signal as possible 
while minimizing the geometric explosion of overall uncertainty that arises 
from having too many dimensions which either duplicate each other (are highly 
correlated) or do not contain any appreciable useful signal, but do introduce 
potential bias, error, etc.). Bucketization/classification of data is another 
typical example. 
What used to be laborious and hence slow is increasingly being performed at 
impressive rates. And by this I mean the quite extensive array of specialized 
activities, as well as the web of pipelines between them (e.g. the bus, as it 
is often called, and the queue/repository-cache based architecture 
underpinning these things). All of it is now not only becoming automatically 
processed, but the processing is becoming both higher fidelity and also much 
faster.
The cost of getting high quality, clean datasets out of raw data …
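One of the granular steps mentioned above, pruning highly correlated, 
duplicate-signal dimensions before training, can be sketched in a few lines. 
This is a minimal illustration under assumed names: the function 
`drop_correlated`, the 0.95 threshold, and the toy columns are all invented 
for the example, not taken from any particular pipeline.

```python
import numpy as np
import pandas as pd

def drop_correlated(df: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
    """Drop one column from every pair whose absolute Pearson
    correlation exceeds `threshold`, keeping the first column seen."""
    corr = df.corr().abs()
    # Look only at the upper triangle so each pair is inspected once.
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return df.drop(columns=to_drop)

rng = np.random.default_rng(1)
a = rng.normal(size=200)
df = pd.DataFrame({
    "a": a,
    "a_dup": a + rng.normal(scale=0.01, size=200),  # near-duplicate of "a"
    "b": rng.normal(size=200),                      # independent dimension
})
print(drop_correlated(df).columns.tolist())
```

Here "a_dup" carries essentially the same signal as "a" and gets pruned, 
while the independent column "b" survives, which is the point of the step: 
fewer dimensions, same usable signal.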

Re: Positive AI

2018-01-16 Thread Brent Meeker



On 1/16/2018 8:55 PM, 'Chris de Morsella' via Everything List wrote:

--What is the craziest AI application you can think of?

A machine learned pet translator perhaps... they're actually working 
on that app, Amazon amongst others.
So, it seems the big players Google as well, are running in that 
race... think of the potential market of pet owners forking over their 
hard earned money to hear what the Google machine is telling them 
their dog is telling them. I can imagine the marketing folks dreaming 
about that market. As an aside also a commentary on how out of touch, 
we humans have become from the world in which we exist. People already 
understand dog language :)


Of course teaching the AI requires lots of training examples, so you 
will need people to translate what their dog is saying to create the 
training examples.  Google will probably try to get people to do this 
online, similar to the way they got visual identification training 
examples.  But the really interesting point is that not only do people 
understand dogs, it's also the case that dogs understand people.  So 
when Google's dog->human translate says, "Fido says the mailman is 
here." will Fido be able to listen to that and say, "Rowf" -> "That's 
right."?
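The setup described above is, at bottom, ordinary supervised learning 
bottlenecked on owner-provided labels. A hypothetical minimal sketch follows: 
the random vectors stand in for audio embeddings of vocalization clips, and 
the label names ("mailman", "food", "play") are invented stand-ins for the 
translations owners would contribute; nothing here comes from an actual 
pet-translator product.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for audio-embedding vectors of dog vocalizations.
# In a real system these would come from a feature extractor over clips,
# and the labels from owners annotating their own pets.
n_samples, n_features = 600, 32
X = rng.normal(size=(n_samples, n_features))
labels = np.array(["mailman", "food", "play"])
y = labels[rng.integers(0, 3, size=n_samples)]

# Inject a weak class-dependent signal so the model has something to learn.
for i, lab in enumerate(labels):
    X[y == lab, i] += 2.0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

The hard part, as noted, is not the classifier but collecting enough honest 
(owner, clip, label) triples; with crowd-sourced labels, label quality becomes 
the limiting factor.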


Brent





Re: Positive AI

2018-01-16 Thread 'Chris de Morsella' via Everything List
--What is the craziest AI application you can think of?

A machine learned pet translator perhaps... they're actually working on that 
app, Amazon amongst others. So, it seems the big players, Google as well, are 
running in that race... think of the potential market of pet owners forking 
over their hard earned money to hear what the Google machine is telling them 
their dog is telling them. I can imagine the marketing folks dreaming about 
that market. As an aside, also a commentary on how out of touch we humans have 
become from the world in which we exist. People already understand dog 
language :)
What I think would be a wild application of machine learned systems is in 
tackling the decoding/deciphering of lost ancient human languages and record 
keeping systems (such as the Inca knotted strings, for example). Wouldn't that 
be cool... AI helping us humans learn about our own lost cultural heritage.
-Chris 
 
  On Tue, Jan 16, 2018 at 5:29 AM, K E N O wrote:

Oh, no! As a media art student, I don’t believe in strict rules of usefulness 
(of course!). It was a rather suggestive or maybe even sarcastic approach to 
get unusual thoughts from everything. Maybe I should rephrase my question: 
What is the craziest AI application you can think of?
K E N O

Are you suggesting that fun is useless? 
I can agree that the idea that fun has some use is not much funny, but that 
does not make it false.
“Useful” is quite relative, also. Flies have no use of spider webs.
Bruno




Re: Positive AI

2018-01-16 Thread K E N O
Oh, no! As a media art student, I don’t believe in strict rules of usefulness 
(of course!). It was a rather suggestive or maybe even sarcastic approach to 
get unusual thoughts from everything.
Maybe I should rephrase my question: What is the craziest AI application you 
can think of?

K E N O

> Are you suggesting that fun is useless? 
> 
> I can agree that the idea that fun has some use is not much funny, but that 
> does not make it false.
> 
> “Useful” is quite relative, also. Flies have no use of spider webs.
> 
> Bruno



Re: Positive AI

2018-01-15 Thread 'Chris de Morsella' via Everything List


  On Mon, Jan 15, 2018 at 9:12 AM, Bruno Marchal wrote:   

On 12 Jan 2018, at 20:48, K E N O  wrote:
Nice! Can you imagine something totally useless as an application of AI? What 
would you create if you just wanted to have fun with AI?


Are you suggesting that fun is useless? 
I can agree that the idea that fun has some use is not very funny, but that 
does not make it false.
“Useful” is quite relative, also. Flies have no use for spider webs.
Bruno
Lol... no they do not, but spiders have a use for flies. Usefulness is a 
paradox. A deadly poison can be useful, not only to kill, but often as a 
life-saving medicine. Many people are alive today because they were poisoned 
with medically calibrated doses. As you said, "useful" is a highly subjective 
and relative term; it is, shall we say, highly entangled with both the subject 
and the object. It is hard, in fact, to speak of "usefulness" without 
reference to some relative and/or subjective context.
-Chris





Re: Positive AI

2018-01-15 Thread Bruno Marchal

> On 12 Jan 2018, at 20:48, K E N O  wrote:
> 
> Nice! Can you imagine something totally useless as an application of AI? What 
> would you create if you just wanted to have fun with AI?


Are you suggesting that fun is useless? 

I can agree that the idea that fun has some use is not very funny, but that 
does not make it false.

“Useful” is quite relative, also. Flies have no use for spider webs.

Bruno




Re: Positive AI

2018-01-12 Thread Telmo Menezes
Sure! Things that generate interesting images, sounds or videos.

One of my favorite simple ideas is to use genetic programming (an AI
approach based on pseudo-Darwinian evolution of computer programs --
it's much simpler than it sounds) to evolve functions that define
images, for example by defining three functions:
red(x, y)
green(x, y)
blue(x, y)

Then use human choices to evolve the images. By adding a t parameter
you get videos. Here's a version of this idea:
https://gold.electricsheep.org/
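The idea above can be sketched in a few dozen lines. What follows is a
minimal, hypothetical Python sketch (all function names and the primitive
set are my own choices, not taken from Electric Sheep or any library
mentioned here): random expression trees over x and y are generated and
mutated, and evaluating three trees per pixel gives the red, green and blue
channels. Selection is left to a human, as described above.

```python
import math
import random

# Primitive operations for the expression trees (an assumed, minimal set).
UNARY = {"sin": math.sin, "cos": math.cos, "abs": abs}
BINARY = {"add": lambda a, b: a + b,
          "mul": lambda a, b: a * b,
          "avg": lambda a, b: (a + b) / 2}

def random_tree(depth=3, rng=random):
    """Build a random expression tree over the variables x and y."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(["x", "y"])
    if rng.random() < 0.5:
        return (rng.choice(list(UNARY)), random_tree(depth - 1, rng))
    op = rng.choice(list(BINARY))
    return (op, random_tree(depth - 1, rng), random_tree(depth - 1, rng))

def evaluate(tree, x, y):
    """Evaluate a tree at pixel coordinates (x, y) in [-1, 1]."""
    if tree == "x":
        return x
    if tree == "y":
        return y
    if len(tree) == 2:                       # unary node
        return UNARY[tree[0]](evaluate(tree[1], x, y))
    return BINARY[tree[0]](evaluate(tree[1], x, y),
                           evaluate(tree[2], x, y))

def render(rgb_trees, size=32):
    """Render the three channel trees into a size x size RGB image."""
    img = []
    for j in range(size):
        row = []
        for i in range(size):
            x = 2 * i / (size - 1) - 1
            y = 2 * j / (size - 1) - 1
            # Squash each channel into [0, 255] with a sigmoid.
            row.append(tuple(int(255 / (1 + math.exp(-evaluate(t, x, y))))
                             for t in rgb_trees))
        img.append(row)
    return img

def mutate(tree, rng=random):
    """Replace a random subtree: the variation step; selection is human."""
    if isinstance(tree, str) or rng.random() < 0.2:
        return random_tree(2, rng)
    return (tree[0],) + tuple(mutate(c, rng) for c in tree[1:])
```

Evolving images then just means rendering a population of tree triples,
letting a person pick favourites, and breeding the next generation with
`mutate`. Adding a `t` argument to the trees turns the same sketch into
video, per the remark above.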

There are countless cool ideas that apply to image and sound. Search for
"computational creativity", "procedural art" and look into the field
of "artificial life" in general. They tend to have the coolest fun
ideas.

On Fri, Jan 12, 2018 at 8:48 PM, K E N O  wrote:
> Nice! Can you imagine something totally useless as an application of AI?
> What would you create if you just wanted to have fun with AI?
>
> K E N O

Re: Positive AI

2018-01-12 Thread K E N O
Nice! Can you imagine something totally useless as an application of AI? What 
would you create if you just wanted to have fun with AI?

K E N O


Re: Positive AI

2018-01-12 Thread Telmo Menezes
Hi Lara,

My view is that, as with all scientific theories and technologies, AI
is morally neutral. It has the potential for both extremely good and
extremely nasty practical applications. That being said, the unusual
thing about AI is that it has the potential to generate *something
that replaces us*. Some people say that it could happen in the next
few decades, some people say that it will never happen. I don't think
anyone knows.

Leaving that more crazy question aside, and focusing on your question
in relation to what can be done with AI right now: I think that the
negativity that currently surrounds the technology says more about our
species and our moment in culture than AI itself. You ask for positive
AI goals:

- Assisting and replacing health-care professionals, making
health-care cheaper for everyone and more widely available to people
in poor and remote regions;
- Enabling advanced prosthetics: assisting people with sensory
impairments, mitigating the consequences of ageing and so on;
- Freeing us from labor, taking care of relatively simple and
repetitive tasks such as growing food, collecting trash etc.
- Self-driving cars can be great: they can reduce risk (traveling by
car is one of the most dangerous means of transportation) and they can
help the environment. If I can call a car from a pool of available
cars to come pick me and drive me somewhere, a much more rational use
of resources can be achieved and cities can become more livable
(instead of being cluttered with cars that are parked most of the
time);
- Assisting scientific research, proving theorems, generating theories
from data that are too counter-intuitive for humans (a bit of
self-promotion: https://www.nature.com/articles/srep06284);
- AI can be used to solve problems quite creatively, check this out:
https://en.wikipedia.org/wiki/Evolved_antenna;
- Personal assistants, but not the kind that are connected to some
centralized corporate brain -- the kind that really works for you
(example: https://mycroft.ai/)
- etc, etc etc

It is true that most funding currently goes towards three goals:
- How to make you see more ads and buy more stuff;
- How to let those in power know more about what everyone is
doing/saying/thinking in private, so that they can have even more
power;
- How to build weapons with it.

This is our usual human stupidity at work. Stupidity tends to be
self-destructive. I think the entire advertisement angle is already
showing signs of collapse. There is hope. Focus on the first list.

Cheers,
Telmo.




Positive AI

2018-01-11 Thread Lara
Dear Everything,

I have been working on my bachelor project with the topic *Artificial 
Intelligence*. Even though I have decided I want to create an AI-something 
to support an everyday activity, I am lost. I have done a lot of research 
and most of the time I am very critical: A lot of negative power is given 
to algorithms (like those big data algorithms deciding what we see online), 
some inventions could be very dangerous (self-driving cars) and most of the 
time inventions could be cool, if we ignored the evil people behind them. 
But for my bachelor I want to create a positive AI-thing for everyday life 
(with a prototype).

Maybe some of you have a good idea, a direction or just a thought for me to 
get further with my project. Is there even a point in positive AI?

Thank you!

Lara



Re: Is AI really a threat to mankind?

2017-12-10 Thread Bruno Marchal


On 09 Dec 2017, at 15:48, Lawrence Crowell wrote:

On Thursday, December 7, 2017 at 5:19:02 PM UTC-6,  
agrays...@gmail.com wrote:



On Thursday, December 7, 2017 at 9:47:42 PM UTC, Brent wrote:
When I took a series of classes in Artificial Intelligence at UCLA  
in the '70s the professor introducing the material of the first  
class explained that, "Intelligence is whatever a computer can't  
do... yet."


Brent

The fear of AI is that computers could eventually exhibit a  
characteristic reminiscent of "will" and direct it maliciously  
against humans. I suppose for you that's not a problem since, IIRC,  
you deny the existence of will. AG


For a computer to be intelligent, and maybe even acquire some form  
of self awareness, it must be able to re-script its data stack and  
even some of its programming. The recent gains in AI have begun to  
push into this territory. This would require some subtle work as  
this becomes more developed. The system can't become trapped in  
self-referential loops, but it may in time also require that these  
be employed. A truncated form of self-reference, one that  
diagonalizes a finite list, may permit a system to "pop out" of its  
knowledge base. The system may then acquire unprovable truths in a  
partially stochastic way. We obviously can't have systems that  
require an infinite amount of information to perform the Gödelian  
trick, but we might be able to approximate it.


The Gödelian trick is constructive, and the universal machines rich  
enough to "know" that they are universal (like Peano Arithmetic, ZF,  
etc.) can prove their own incompleteness theorem. I would say they  
are as intelligent as you or me, with billions of years' fewer  
prejudices, though.
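As an aside on the constructive character of that diagonal trick: its
simplest programmatic incarnation is a quine, a program that obtains a
complete description of itself by applying a template to a quotation of
itself, much as the diagonal lemma applies a formula to its own code. A
minimal Python illustration (my own example; the two executable lines
print their own source exactly, comments excluded):

```python
# The diagonal trick in miniature: `s` is a template, and `s % s`
# applies the template to a quotation of itself (via repr, i.e. %r).
# Running the two lines below prints exactly those two lines.
s = 's = %r\nprint(s %% s, end="")'
print(s % s, end="")
```

No infinite regress is needed: quotation plus substitution is enough,
which is the sense in which the Gödelian construction is finite and
approximable by a machine.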






I suspect AI might learn how to become self aware by being  
interfaced with human brains.


I think they are already self-aware, and even "enlightened". We can  
only make their "soul" fall (which is roughly the passage from p ->  
[]p, up to []p & <>p, if you study some of my papers).





50 years from now I think much of humanity will have their brains  
interlinked. This will mean that consciousness will no longer be a  
private thing and that AI systems will acquire it as well. Where  
things go from there is anyone's guess. Maybe the machines will  
steal our consciousness and then discard us as useless.


What is an intelligent machine? A machine which lynches its fellows?  
Which enforces religion? Which destroys its planet? Which makes money  
on diseases?


I distinguish competence and intelligence. Competence is definable on  
domains, and can be locally evaluated. Intelligence is something more  
akin to wisdom and openness to the unknown; it is closer to courage  
than to competence. Competence needs intelligence to grow and to  
adapt/evolve, but it has a negative feedback on intelligence, notably  
because it can lead to the "feeling superior" idea, which is a sign  
of stupidity (at least in some theories very natural in the "theology  
of the universal machine", the modal logics G and G*).


Bruno







LC



http://iridia.ulb.ac.be/~marchal/





Re: Is AI really a threat to mankind?

2017-12-10 Thread Bruno Marchal


On 09 Dec 2017, at 01:07, agrayson2...@gmail.com wrote:




On Friday, December 8, 2017 at 5:54:51 PM UTC, Bruno Marchal wrote:

On 08 Dec 2017, at 03:37, agrays...@gmail.com wrote:




On Friday, December 8, 2017 at 1:42:01 AM UTC, Brent wrote:


On 12/7/2017 3:19 PM, agrays...@gmail.com wrote:



On Thursday, December 7, 2017 at 9:47:42 PM UTC, Brent wrote:
When I took a series of classes in Artificial Intelligence at  
UCLA in the '70s the professor introducing the material of the  
first class explained that, "Intelligence is whatever a computer  
can't do... yet."


Brent

The fear of AI is that computers could eventually exhibit a  
characteristic reminiscent of "will" and direct it maliciously  
against humans. I suppose for you that's not a problem since,  
IIRC, you deny the existence of will. AG


I don't deny the existence of will.  I deny the existence of what  
is commonly called "free will".


Brent

Then what we call "free will" is really just a DNA-determined  
behavioral outcome. So if computers will eventually mimic human  
behavior, the fear of AI might be well founded. AG



We should learn to fear only stupidity, be it natural or artificial  
(which is BTW an artificial separation,


Not an artificial separation. Everyone knows what "artificial"  
means. If you don't, check any English dictionary. Please save your  
private language for your dreams. OK? AG


I don't believe in dictionaries, especially during an argument. The  
definitions there are culturally based and very temporary, and often  
human-based, for obvious purposes.


Now, the fact that "artificial" makes sense for all humans  
illustrates my point that the notion is relative to humans, and not  
a general concept.


Bruno




... done naturally by the entities developing a big ego).

I fear more the disappearance of Net Neutrality, and that one day we  
will have to pay much more to chat on the net.


Human stupidity still has a future, if it does not destroy itself  
first.


It will take some time before the man-made machines get as stupid as  
us.


Bruno











http://iridia.ulb.ac.be/~marchal/





Re: Is AI really a threat to mankind?

2017-12-09 Thread Brent Meeker



On 12/9/2017 1:13 PM, Lawrence Crowell wrote:

On Saturday, December 9, 2017 at 2:31:48 PM UTC-6, Brent wrote:



On 12/9/2017 6:48 AM, Lawrence Crowell wrote:

On Thursday, December 7, 2017 at 5:19:02 PM UTC-6,
agrays...@gmail.com wrote:



On Thursday, December 7, 2017 at 9:47:42 PM UTC, Brent wrote:

When I took a series of classes in Artificial
Intelligence at UCLA in the '70s, the professor
introducing the material of the first class explained
that, "Intelligence is whatever a computer can't do ... yet."

Brent


    The fear of AI is that computers could eventually exhibit a
characteristic reminiscent of "will" and exhibit it
maliciously against humans. I suppose for you that's not a
problem since, IIRC, you deny the existence of will. AG


For a computer to be intelligent, and maybe even acquire some
form of self-awareness, it must be able to re-script its data
stack and even some of its programming. The recent gains in AI
have begun to push into this territory. This would require some
subtle work as this becomes more developed. The system can't
become trapped in self-referential loops, but it also may in
time require that these be employed. A truncated form of
self-reference, one that diagonalizes a finite list, may permit a
system to "pop out" of its knowledge base. The system may then
acquire unprovable truths in a partially stochastic way. We
obviously can't have systems that require an infinite amount of
information to perform the Gödelian trick, but we might be able to
approximate it.
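The "diagonalizing a finite list" move can be made concrete with a toy Cantor-style construction (a sketch of my own, not Crowell's actual proposal; the function name `diagonalize` and the bit-string encoding are assumptions): the diagonal string differs from the i-th listed string at position i, so it is guaranteed to lie outside the finite knowledge base.

```python
# Cantor-style diagonalization over a finite list of equal-length bit strings.
# The result differs from the i-th string at position i, so it cannot appear
# anywhere in the list -- the system "pops out" of its own knowledge base.

def diagonalize(knowledge_base):
    """Flip the i-th bit of the i-th string to build a string not in the list."""
    return "".join("0" if row[i] == "1" else "1"
                   for i, row in enumerate(knowledge_base))

kb = ["0110", "1010", "0011", "1111"]
new_fact = diagonalize(kb)
print(new_fact)        # "1100" -- differs from kb[i] at index i
assert new_fact not in kb
```

Of course this only guarantees novelty, not truth; it illustrates the mechanism of popping out of a finite list, not the Gödelian step itself.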

I suspect AI might learn how to become self aware by being
interfaced with human brains.


But note that humans are not self-aware in the sense you're
contemplating.  They cannot consciously "re-script their data
stack" or programming.  People are self-aware in that they have a
model of themselves in the world and in social relations.  So one
models oneself having thoughts and other people having thoughts as
part of one's model of the world.

Brent


Learning is a case of rescripting a data stack. Dendrites that are 
pared back and built up in different ways are clearly a case of 
restructuring the computing system.
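The claim that learning is a re-scripting of data rather than code can be sketched in a few lines (a toy illustration of my own, not from the thread; the names `weights`, `choose`, and `learn` are invented): the program's instructions never change, but experience rewrites its internal table, and with it the behavior.

```python
# Toy sketch: "learning as re-scripting a data stack". The code (the update
# rule) is fixed; only the data (the weight table) is rewritten by experience,
# yet the system's behavior changes -- loosely analogous to dendrites being
# pared back and built up without the "program" itself being edited.

weights = {"left": 0.0, "right": 0.0}   # the mutable "data stack"

def choose():
    # deterministic greedy choice over the current table
    return max(weights, key=weights.get)

def learn(action, reward, rate=0.5):
    # re-script the data, not the program
    weights[action] += rate * (reward - weights[action])

learn("right", 1.0)   # experience: "right" was rewarded
learn("left", -1.0)   # experience: "left" was punished
print(choose())       # -> right
```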


But humans have to do it by perceptions and practice - not by directly 
acting on their neurons (of which they were unaware for millions of years).


Brent



LC


50 years from now I think much of humanity will have their brains
interlinked. This will mean that consciousness will no longer be
a private thing and that AI systems will acquire it as well.
Where things go from there is anyone's guess. Maybe the machines
will steal our consciousness and then discard us as useless.

LC




Re: Is AI really a threat to mankind?

2017-12-09 Thread Lawrence Crowell
On Saturday, December 9, 2017 at 2:31:48 PM UTC-6, Brent wrote:
>
>
>
> On 12/9/2017 6:48 AM, Lawrence Crowell wrote:
>
> On Thursday, December 7, 2017 at 5:19:02 PM UTC-6, agrays...@gmail.com 
> wrote: 
>>
>>
>>
>> On Thursday, December 7, 2017 at 9:47:42 PM UTC, Brent wrote: 
>>>
>>> When I took a series of classes in Artificial Intelligence at UCLA in 
>>> the '70s, the professor introducing the material of the first class 
>>> explained that, "Intelligence is whatever a computer can't do ... yet."
>>>
>>> Brent
>>>
>>
>> The fear of AI is that computers could eventually exhibit a 
>> characteristic reminiscent of "will" and exhibit it maliciously against 
>> humans. I suppose for you that's not a problem since, IIRC, you deny the 
>> existence of will. AG 
>>
>
> For a computer to be intelligent, and maybe even acquire some form of 
> self-awareness, it must be able to re-script its data stack and even some of 
> its programming. The recent gains in AI have begun to push into this 
> territory. This would require some subtle work as this becomes more 
> developed. The system can't become trapped in self-referential loops, but it 
> also may in time require that these be employed. A truncated form of 
> self-reference, one that diagonalizes a finite list, may permit a system to 
> "pop out" of its knowledge base. The system may then acquire unprovable 
> truths in a partially stochastic way. We obviously can't have systems that 
> require an infinite amount of information to perform the Gödelian trick, but 
> we might be able to approximate it. 
>
> I suspect AI might learn how to become self aware by being interfaced with 
> human brains. 
>
>
> But note that humans are not self-aware in the sense you're 
> contemplating.  They cannot consciously "re-script their data stack" or 
> programming.  People are self-aware in that they have a model of themselves 
> in the world and in social relations.  So one models oneself having 
> thoughts and other people having thoughts as part of one's model of the 
> world.
>
> Brent
>
>
Learning is a case of rescripting a data stack. Dendrites that are pared 
back and built up in different ways are clearly a case of restructuring the 
computing system.

LC
 

> 50 years from now I think much of humanity will have their brains 
> interlinked. This will mean that consciousness will no longer be a private 
> thing and that AI systems will acquire it as well. Where things go from 
> there is anyone's guess. Maybe the machines will steal our consciousness 
> and then discard us as useless.
>
> LC
>
>



Re: Is AI really a threat to mankind?

2017-12-09 Thread Brent Meeker



On 12/9/2017 6:48 AM, Lawrence Crowell wrote:
On Thursday, December 7, 2017 at 5:19:02 PM UTC-6, agrays...@gmail.com 
wrote:




On Thursday, December 7, 2017 at 9:47:42 PM UTC, Brent wrote:

When I took a series of classes in Artificial Intelligence at
UCLA in the '70s, the professor introducing the material of the
first class explained that, "Intelligence is whatever a
computer can't do ... yet."

Brent


The fear of AI is that computers could eventually exhibit a
characteristic reminiscent of "will" and exhibit it maliciously
against humans. I suppose for you that's not a problem since,
IIRC, you deny the existence of will. AG


For a computer to be intelligent, and maybe even acquire some form of 
self-awareness, it must be able to re-script its data stack and even 
some of its programming. The recent gains in AI have begun to push 
into this territory. This would require some subtle work as this 
becomes more developed. The system can't become trapped in 
self-referential loops, but it also may in time require that these be 
employed. A truncated form of self-reference, one that diagonalizes a 
finite list, may permit a system to "pop out" of its knowledge base. 
The system may then acquire unprovable truths in a partially 
stochastic way. We obviously can't have systems that require an 
infinite amount of information to perform the Gödelian trick, but we might 
be able to approximate it.


I suspect AI might learn how to become self aware by being interfaced 
with human brains.


But note that humans are not self-aware in the sense you're 
contemplating.  They cannot consciously "re-script their data stack" or 
programming.  People are self-aware in that they have a model of 
themselves in the world and in social relations.  So one models oneself 
having thoughts and other people having thoughts as part of one's model 
of the world.


Brent

50 years from now I think much of humanity will have their brains 
interlinked. This will mean that consciousness will no longer be a 
private thing and that AI systems will acquire it as well. Where 
things go from there is anyone's guess. Maybe the machines will steal 
our consciousness and then discard us as useless.


LC


Re: Is AI really a threat to mankind?

2017-12-09 Thread Lawrence Crowell
On Thursday, December 7, 2017 at 5:19:02 PM UTC-6, agrays...@gmail.com 
wrote:
>
>
>
> On Thursday, December 7, 2017 at 9:47:42 PM UTC, Brent wrote:
>>
>> When I took a series of classes in Artificial Intelligence at UCLA in 
>> the '70s, the professor introducing the material of the first class 
>> explained that, "Intelligence is whatever a computer can't do ... yet."
>>
>> Brent
>>
>
> The fear of AI is that computers could eventually exhibit a characteristic 
> reminiscent of "will" and exhibit it maliciously against humans. I suppose 
> for you that's not a problem since, IIRC, you deny the existence of will. 
> AG 
>

For a computer to be intelligent, and maybe even acquire some form of 
self-awareness, it must be able to re-script its data stack and even some of 
its programming. The recent gains in AI have begun to push into this territory. 
This would require some subtle work as this becomes more developed. The 
system can't become trapped in self-referential loops, but it also may in 
time require that these be employed. A truncated form of self-reference, one 
that diagonalizes a finite list, may permit a system to "pop out" of its 
knowledge base. The system may then acquire unprovable truths in a 
partially stochastic way. We obviously can't have systems that require an 
infinite amount of information to perform the Gödelian trick, but we might be 
able to approximate it. 

I suspect AI might learn how to become self aware by being interfaced with 
human brains. 50 years from now I think much of humanity will have their 
brains interlinked. This will mean that consciousness will no longer be a 
private thing and that AI systems will acquire it as well. Where things go 
from there is anyone's guess. Maybe the machines will steal our 
consciousness and then discard us as useless.

LC



Re: Is AI really a threat to mankind?

2017-12-08 Thread agrayson2000


On Friday, December 8, 2017 at 5:54:51 PM UTC, Bruno Marchal wrote:
>
>
> On 08 Dec 2017, at 03:37, agrays...@gmail.com  wrote:
>
>
>
> On Friday, December 8, 2017 at 1:42:01 AM UTC, Brent wrote:
>>
>>
>>
>> On 12/7/2017 3:19 PM, agrays...@gmail.com wrote:
>>
>>
>>
>> On Thursday, December 7, 2017 at 9:47:42 PM UTC, Brent wrote: 
>>>
>>> When I took a series of classes in Artificial Intelligence at UCLA in 
>>> the '70s, the professor introducing the material of the first class 
>>> explained that, "Intelligence is whatever a computer can't do ... yet."
>>>
>>> Brent
>>>
>>
>> The fear of AI is that computers could eventually exhibit a 
>> characteristic reminiscent of "will" and exhibit it maliciously against 
>> humans. I suppose for you that's not a problem since, IIRC, you deny the 
>> existence of will. AG 
>>
>>
>> I don't deny the existence of will.  I deny the existence of what is 
>> commonly called "free will".
>>
>> Brent
>>
>
> Then what we call "free will" is really just a DNA determined behavioral 
> outcome. So if computers will eventually mimic human behavior, the fear of 
> AI might be well founded. AG 
>
>
>
> We should learn to fear only stupidity, be it natural or artificial (which 
> is BTW an artificial separation,
>

*Not an artificial separation. Everyone knows what "artificial" means. If 
you don't, check any English dictionary. Please save your private language 
for your dreams. OK? AG*
 

> ... done naturally by the entities developing a big ego).
>
> I fear more the disappearance of Net Neutrality, and that one day we 
> will have to pay much more to chat on the net. 
>
> Human stupidity still has some future ahead of it, if it does not destroy 
> itself first.
>
> It will take some time before the man-made machines get as stupid as us.
>
> Bruno
>
>
>
>



Re: Is AI really a threat to mankind?

2017-12-08 Thread Bruno Marchal


On 08 Dec 2017, at 03:37, agrayson2...@gmail.com wrote:




On Friday, December 8, 2017 at 1:42:01 AM UTC, Brent wrote:


On 12/7/2017 3:19 PM, agrays...@gmail.com wrote:



On Thursday, December 7, 2017 at 9:47:42 PM UTC, Brent wrote:
When I took a series of classes in Artificial Intelligence at UCLA  
in the '70s, the professor introducing the material of the first  
class explained that, "Intelligence is whatever a computer can't  
do ... yet."


Brent

The fear of AI is that computers could eventually exhibit a  
characteristic reminiscent of "will" and exhibit it maliciously  
against humans. I suppose for you that's not a problem since, IIRC,  
you deny the existence of will. AG


I don't deny the existence of will.  I deny the existence of what is  
commonly called "free will".


Brent

Then what we call "free will" is really just a DNA determined  
behavioral outcome. So if computers will eventually mimic human  
behavior, the fear of AI might be well founded. AG



We should learn to fear only stupidity, be it natural or artificial  
(which is BTW an artificial separation, ... done naturally by the  
entities developing a big ego).


I fear more the disappearance of Net Neutrality, and that one day  
we will have to pay much more to chat on the net.


Human stupidity still has some future ahead of it, if it does not destroy  
itself first.


It will take some time before the man-made machines get as stupid as us.

Bruno







Re: Is AI really a threat to mankind?

2017-12-07 Thread agrayson2000


On Friday, December 8, 2017 at 2:53:30 AM UTC, Brent wrote:
>
>
>
> On 12/7/2017 6:37 PM, agrays...@gmail.com  wrote:
>
>
>
> On Friday, December 8, 2017 at 1:42:01 AM UTC, Brent wrote: 
>>
>>
>>
>> On 12/7/2017 3:19 PM, agrays...@gmail.com wrote:
>>
>>
>>
>> On Thursday, December 7, 2017 at 9:47:42 PM UTC, Brent wrote: 
>>>
>>> When I took a series of classes in Artificial Intelligence at UCLA in 
>>> the '70s, the professor introducing the material of the first class 
>>> explained that, "Intelligence is whatever a computer can't do ... yet."
>>>
>>> Brent
>>>
>>
>> The fear of AI is that computers could eventually exhibit a 
>> characteristic reminiscent of "will" and exhibit it maliciously against 
>> humans. I suppose for you that's not a problem since, IIRC, you deny the 
>> existence of will. AG 
>>
>>
>> I don't deny the existence of will.  I deny the existence of what is 
>> commonly called "free will".
>>
>> Brent
>>
>
> Then what we call "free will" is really just a DNA determined behavioral 
> outcome. 
>
>
> No, it's DNA plus years of experience.
>
> Brent
>

In what computers can now do, they immensely exceed our human capabilities. 
The chess example suffices. In effect we are in the process of creating a new 
species, and there is no predicting what they can or might do if they can 
communicate with each other. I do believe AI represents a threat to 
humanity, as Hawking and Musk allege. AG 



Re: Is AI really a threat to mankind?

2017-12-07 Thread Brent Meeker



On 12/7/2017 6:37 PM, agrayson2...@gmail.com wrote:



On Friday, December 8, 2017 at 1:42:01 AM UTC, Brent wrote:



On 12/7/2017 3:19 PM, agrays...@gmail.com  wrote:



On Thursday, December 7, 2017 at 9:47:42 PM UTC, Brent wrote:

When I took a series of classes in Artificial Intelligence
at UCLA in the '70s, the professor introducing the material of
the first class explained that, "Intelligence is whatever a
computer can't do ... yet."

Brent


The fear of AI is that computers could eventually exhibit a
characteristic reminiscent of "will" and exhibit it maliciously
against humans. I suppose for you that's not a problem since,
IIRC, you deny the existence of will. AG


I don't deny the existence of will.  I deny the existence of what
is commonly called "free will".

Brent


Then what we call "free will" is really just a DNA determined 
behavioral outcome.


No, it's DNA plus years of experience.

Brent



Re: Is AI really a threat to mankind?

2017-12-07 Thread Stathis Papaioannou
On Fri, 8 Dec 2017 at 10:19 am,  wrote:

>
>
> On Thursday, December 7, 2017 at 9:47:42 PM UTC, Brent wrote:
>>
>> When I took a series of classes in Artificial Intelligence at UCLA in
>> the '70s, the professor introducing the material of the first class
>> explained that, "Intelligence is whatever a computer can't do ... yet."
>>
>> Brent
>>
>
> The fear of AI is that computers could eventually exhibit a characteristic
> reminiscent of "will" and exhibit it maliciously against humans. I suppose
> for you that's not a problem since, IIRC, you deny the existence of will. AG
>

What’s the difference between acting maliciously against humans with and
without will?

> --
Stathis Papaioannou



Re: Is AI really a threat to mankind?

2017-12-07 Thread agrayson2000


On Friday, December 8, 2017 at 1:42:01 AM UTC, Brent wrote:
>
>
>
> On 12/7/2017 3:19 PM, agrays...@gmail.com  wrote:
>
>
>
> On Thursday, December 7, 2017 at 9:47:42 PM UTC, Brent wrote: 
>>
>> When I took a series of classes in Artificial Intelligence at UCLA in 
>> the '70s, the professor introducing the material of the first class 
>> explained that, "Intelligence is whatever a computer can't do ... yet."
>>
>> Brent
>>
>
> The fear of AI is that computers could eventually exhibit a characteristic 
> reminiscent of "will" and exhibit it maliciously against humans. I suppose 
> for you that's not a problem since, IIRC, you deny the existence of will. 
> AG 
>
>
> I don't deny the existence of will.  I deny the existence of what is 
> commonly called "free will".
>
> Brent
>

Then what we call "free will" is really just a DNA determined behavioral 
outcome. So if computers will eventually mimic human behavior, the fear of 
AI might be well founded. AG 



Re: Is AI really a threat to mankind?

2017-12-07 Thread Brent Meeker



On 12/7/2017 3:19 PM, agrayson2...@gmail.com wrote:



On Thursday, December 7, 2017 at 9:47:42 PM UTC, Brent wrote:

When I took a series of classes in Artificial Intelligence at
UCLA in the '70s, the professor introducing the material of the
first class explained that, "Intelligence is whatever a computer
can't do ... yet."

Brent


The fear of AI is that computers could eventually exhibit a 
characteristic reminiscent of "will" and exhibit it maliciously 
against humans. I suppose for you that's not a problem since, IIRC, 
you deny the existence of will. AG


I don't deny the existence of will.  I deny the existence of what is 
commonly called "free will".


Brent



Re: Is AI really a threat to mankind?

2017-12-07 Thread agrayson2000


On Thursday, December 7, 2017 at 9:47:42 PM UTC, Brent wrote:
>
> When I took a series of classes in Artificial Intelligence at UCLA in the 
> '70s, the professor introducing the material of the first class explained 
> that, "Intelligence is whatever a computer can't do ... yet."
>
> Brent
>

The fear of AI is that computers could eventually exhibit a characteristic 
reminiscent of "will" and exhibit it maliciously against humans. I suppose 
for you that's not a problem since, IIRC, you deny the existence of will. 
AG 

>
> On 12/7/2017 1:32 AM, Alberto G. Corona wrote:
>
> Both: it is very, very hard to simulate and impossible to achieve. 
> The first computer scientists thought that making mathematical computations 
> was a sign of intelligence, but failed miserably with the next goal, and so 
> on.
> Program something that humans do; if your program does it, then it becomes 
> non-intelligent.
>
> 2017-12-06 14:40 GMT+01:00 Telmo Menezes:
>
>> On Tue, Dec 5, 2017 at 4:54 PM, Alberto G. Corona wrote:
>> > Yes. we are all robots. You are the only human  mmwwahahah
>> >
>> > Every decade it is predicted that 50 years from now AI would surpass 
>> human
>> > beings.
>> >
>> > The level of AI was pathetic 50 years ago. It is pathetic now and will 
>> be
>> > pathetic 50 years later.
>>
>> Are you claiming that it can't fundamentally be done? Or that it is
>> harder than people think?
>>
>> > 2017-11-27 22:32 GMT+01:00 >:
>> >>
>> >> IIRC, this is the view of Hawking and Musk.
>> >>
>> >> --
>>
>
>



Re: Is AI really a threat to mankind?

2017-12-07 Thread Brent Meeker
When I took a series of classes in Artificial Intelligence at UCLA in 
the '70s, the professor introducing the material of the first class 
explained that, "Intelligence is whatever a computer can't do ... yet."


Brent

On 12/7/2017 1:32 AM, Alberto G. Corona wrote:

Both: it is very, very hard to simulate and impossible to achieve.
The first computer scientists thought that making mathematical 
computations was a sign of intelligence, but failed miserably with the 
next goal, and so on.
Program something that humans do; if your program does it, then it 
becomes non-intelligent.


2017-12-06 14:40 GMT+01:00 Telmo Menezes:


On Tue, Dec 5, 2017 at 4:54 PM, Alberto G. Corona wrote:
> Yes. we are all robots. You are the only human mmwwahahah
>
> Every decade it is predicted that 50 years from now AI would
surpass human
> beings.
>
> The level of AI was pathetic 50 years ago. It is pathetic now
and will be
> pathetic 50 years later.

Are you claiming that it can't fundamentally be done? Or that it is
harder than people think?

> 2017-11-27 22:32 GMT+01:00 agrayson2...@gmail.com:
>>
>> IIRC, this is the view of Hawking and Musk.
>>
>> --





Re: Is AI really a threat to mankind?

2017-12-07 Thread Stathis Papaioannou
On Thu, 7 Dec 2017 at 8:32 pm, Alberto G. Corona 
wrote:

> Both: it is very, very hard to simulate and impossible to achieve.
> The first computer scientists thought that making mathematical computations
> was a sign of intelligence, but failed miserably with the next goal, and so
> on.
> Program something that humans do; if your program does it, then it becomes
> non-intelligent.
>

So by definition no matter how close to intelligent behaviour a machine
comes, it won’t be intelligent?

> --
Stathis Papaioannou



Re: Is AI really a threat to mankind?

2017-12-07 Thread Telmo Menezes
On Thu, Dec 7, 2017 at 10:32 AM, Alberto G. Corona  wrote:
> Both: it is very, very hard to simulate and impossible to achieve.
> The first computer scientists thought that making mathematical computations
> was a sign of intelligence, but failed miserably with the next goal, and so
> on.

You are arguing that no progress has been made?

> Program something that humans do; if your program does it, then it becomes
> non-intelligent.

This last sentence is usually the AI researcher's lament: that people
always move the goalposts when something is achieved. It happened with
chess for example.

Stage 1: there's no way a computer will ever be able to defeat a
competent human player at chess;
Stage 2: computer defeats best human chess player;
Stage 3: what the program did was not intelligent, it's just
brute-force based on the ability to do millions of calculations per
second;
Stage 4: there's no way a computer will ever be able to defeat a
competent human player at go;
etc.

Telmo.

> 2017-12-06 14:40 GMT+01:00 Telmo Menezes :
>>
>> On Tue, Dec 5, 2017 at 4:54 PM, Alberto G. Corona 
>> wrote:
>> > Yes. we are all robots. You are the only human  mmwwahahah
>> >
>> > Every decade it is predicted that 50 years from now AI would surpass
>> > human
>> > beings.
>> >
>> > The level of AI was pathetic 50 years ago. It is pathetic now and will
>> > be
>> > pathetic 50 years later.
>>
>> Are you claiming that it can't fundamentally be done? Or that it is
>> harder than people think?
>>
>> > 2017-11-27 22:32 GMT+01:00 :
>> >>
>> >> IIRC, this is the view of Hawking and Musk.
>> >>
>> >
>> >
>> >
>> >
>> > --
>> > Alberto.
>> >
>>
>
>
>
>
> --
> Alberto.
>


