________________________________
 From: John Clark <johnkcl...@gmail.com>
 

On Tue, Aug 27, 2013 at 6:55 PM, Chris de Morsella <cdemorse...@yahoo.com> 
wrote:



>>Bullshit. Axioms don't need proof, and the most fundamental axiom in all of 
>>logic is that X is Y or X is not Y.  Everything else is built on top of that. 
>> And only somebody who was absolutely desperate to prove the innate 
>>superiority of humans over computers would try to deny it.
>>

>
> You seem confused... the brain is not an axiom... 

But the fact that X is Y OR X is not Y sure as hell IS AN AXIOM, and so is "an 
event happens for a reason OR an event does not happen for a reason". And first 
you tell me that the above is a tautology that is so obvious that I'm foolish 
for repeating it so often, but now you're insisting that it isn't true. So 
Chris, who is really confused around here?
 
X is Y OR X is not Y -- except when that which is being considered exists in a 
state of superposition :)
But sure, it's true in classical logic. And again I raise your bet with a big SO 
WHAT?
 
If X = Y AND Y = Z then X = Z. This is also logically true, but it likewise has 
no substantial bearing on the dynamic processes by which the mind arises from 
the 86-billion-neuron, 100-trillion-connection, two-phase (electro-chemical) 
network that comprises our brain.
 

> Why you cling so tenaciously to this need for definitive causality chains (or 
> else it must be complete randomness) is amusing
>

>>I'm glad it brought some light to your otherwise drab existence, in fact 
>>because you find it so amusing and the fact that X is Y or X is not Y is so 
>>ubiquitous from now on you should find yourself in a constant state of 
>>hilarity.  

How totally pompous of you. What do you know of my existence, whether it be 
drab or dangerously on the edge? You have no knowledge of my existence, and for 
you to characterize it from your position of gross ignorance reveals a gaping 
deficiency in your reasoning abilities or else a really bad character flaw. 
Could you please refrain from engaging in these kinds of childish attributions 
and characterizations of someone whom you do not know at all but happen to be 
arguing with. Since I do not think you are stupid, I must conclude you are 
pompous, which is not something I would be especially proud of if I were you.

 
> it [the brain] is one of the most complex systems we know about in the 
> observed universe.  
> 

Yes, and that is all the more reason to use reductionism if you want to study 
it. If you had to understand everything about it before you could understand 
anything about the brain (or anything else for that matter) you would remain in 
a constant state of complete ignorance about not just the brain but everything.


> You cannot show definitive causality for most of what goes on in most of the 
> universe.  

>> You just figured that out? Physicists have been telling us that some things 
>> happen for no reason (are random) for nearly a century.

AND when did I say random? I deal with randomness -- or more accurately 
pseudo-randomness, and how to account for it and use it -- all the time in my 
work life. But I am not referring to random events; I was describing the 
difficulty of tracing causality back from an outcome state to an originating 
state (within the frame of reference). I was making the point that, because of 
the chaotic and highly parallelized nature of the brain, the attempt to work 
back and determine the causes is very often in practice impossible.
 
Now hopefully you will finally figure out what I have been trying to 
communicate to you and realize that my stating that it is impossible to work 
back from result X to initial state Y, by trying to rewind events and work back 
step by step, is not the same thing as saying that the outcome X is the result 
of some random process. The brain is not a random state machine; it has a 
definite direction of flow and we experience a clear and consistent outcome. 
Clearly there is cause and effect -- as well as a fair degree of randomness 
that works its way into outcomes along the complex chains of consensus-building 
neocortical algorithms that seem to be operating in us -- and which, by the 
way, DARPA is highly interested in learning more about.
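
To make that concrete with a toy example -- plain Python, a logistic map rather 
than anything remotely neural, so purely illustrative -- this is why rewinding 
a chaotic system is hopeless in practice: two forward trajectories starting a 
hair's breadth apart end up nowhere near each other, so the observed outcome 
state is compatible with a whole cloud of possible initial states.

    # Logistic map in its fully chaotic regime (r = 4): two initial states
    # differing by one part in a trillion diverge to completely different
    # outcomes, so the final state alone cannot tell you where you started.
    def trajectory(x0, steps=60, r=4.0):
        x = x0
        for _ in range(steps):
            x = r * x * (1.0 - x)
        return x

    a = trajectory(0.123456789)
    b = trajectory(0.123456789 + 1e-12)
    print(a, b, abs(a - b))   # the gap is of order 1, not of order 1e-12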

> You can hypothesize a causal relationship perhaps, but you cannot prove one 
> for all manner of phenomenon arising out of chaotic systems. The brain is a 
> noisy chaotic system and you are attempting to impose your Newtonian order on 
> it.  

>> If you're a fan of chaos, computers are perfectly capable of producing it; 
>> in fact the very first computer program I ever wrote used chaos to produce 
>> the Mandelbrot set, an object of quite literally infinite complexity, 
>> although of course there was a limit to how much magnification my little 
>> computer could produce.
 
I work with large systems for well-known software companies and spend my work 
days in the computer code that operates them -- when I am not dragged into 
meetings or responding to emails on an email list :)
Fractals are amazing and are also heavily relied on to generate interesting, 
seemingly highly complex and realistic results using rather simple, recursively 
applied algorithms; that sounds like it must have been a fun project.
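
And the "rather simple recursively applied algorithm" really is tiny; as a 
rough sketch (plain Python, coarse ASCII output, purely illustrative), the 
standard escape-time iteration is essentially this:

    # The whole Mandelbrot rule: iterate z -> z*z + c and watch whether z
    # escapes. All of the "infinite complexity" falls out of this recurrence.
    for row in range(24):
        line = ""
        for col in range(78):
            c = complex(-2.2 + col * 3.2 / 78, -1.2 + row * 2.4 / 24)
            z = 0j
            for _ in range(40):
                z = z * z + c
                if abs(z) > 2.0:     # escaped: point is outside the set
                    line += " "
                    break
            else:
                line += "*"          # never escaped: (approximately) inside
        print(line)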


> Your approach does not map well onto the problem domain. And what you say has 
> no predictive value; it does not help unravel how the brain works... or how 
> the mind arises within it.
>

>>That approach produced Watson! No doubt you will counter by saying that 
>>Watson has nothing to do with mind, and that is exactly why I don't believe 
>>you when you claim to be emotionally neutral and are judging the 
>>human-computer superiority issue strictly on the merits of the case.

Watson was based on self-learning algorithms, if I recall correctly. I am 
pretty sure that the team that developed that software was relying very heavily 
on a self-learning approach, and that Watson was hit with probably hundreds of 
thousands if not millions of queries in a process of self-learning, so that the 
final ready state of the Watson system, when it was put up to the challenge, 
was the outcome of this non-linear approach.

To characterize a self-learning machine -- one that "learned" which 
associations, and which meta-associations as well, to rely on (because it is 
often on the meta-information that the algorithms operate) -- as an example of 
determinism is really stretching it.

Watson was so astoundingly successful on Jeopardy precisely -- I would argue -- 
because it adopted a non-linear and non-directed approach. Programmers did not 
provide Watson with the answers; rather, over many generations of refinement of 
its own pattern-recognition algorithms and known association maps -- clearly 
under the careful and close direction of the programmers (or is it better to 
use the term mentors?) -- Watson was able to demonstrate a remarkable ability 
to associate a correct answer with a Jeopardy question based on very rapid 
lookups in its vast generalized store of knowledge.

Watson was so successful precisely because it did not attempt to impose any 
deterministic algorithms, but rather sought to enable a dynamic process of 
self-learning, after which the system was able to quite successfully discover 
pattern matches from often very arcane bits of information.

It was a very impressive demonstration of how uncannily close to AI a 
generalized pattern-recognition machine like Watson can come, given the state 
of the art in self-learning systems.

The key term & methodology is in fact SELF-LEARNING, which by definition is not 
pre-determined. The programmers did not encode the answers into Watson; rather, 
Watson the software system "learned" which pattern-recognition approaches 
worked and which did not, and it "evolved" its algorithms, with results that 
were so impressive.
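
To illustrate only the distinction I am drawing -- answers never encoded, 
scoring learned from feedback -- here is a toy sketch in plain Python. It is 
emphatically not Watson's actual architecture (DeepQA is enormously more 
elaborate), every name and number in it is invented for the example, and a 
simple perceptron-style update stands in for whatever IBM really used:

    # Toy sketch only, not Watson: a ranker that is never told the answers.
    # It is shown (question, correct answer) pairs and adjusts its own
    # evidence weights until the right candidate tends to win.
    import random
    random.seed(0)

    FEATURES = ["type_match", "keyword_overlap", "source_is_junk"]
    HIDDEN_TRUTH = {"type_match": 2.0, "keyword_overlap": 1.0,
                    "source_is_junk": -2.0}

    def make_example():
        # two simulated candidate answers; the "correct" one is whichever
        # scores higher under hidden weights the learner never sees
        cands = [{f: random.random() for f in FEATURES} for _ in range(2)]
        truth = lambda c: sum(HIDDEN_TRUTH[f] * c[f] for f in FEATURES)
        return cands, max(range(2), key=lambda i: truth(cands[i]))

    weights = {f: 0.0 for f in FEATURES}   # the learner starts knowing nothing

    def score(c):
        return sum(weights[f] * c[f] for f in FEATURES)

    for _ in range(5000):                  # the training queries, in miniature
        cands, correct = make_example()
        guess = max(range(2), key=lambda i: score(cands[i]))
        if guess != correct:               # learn from feedback, not from
            for f in FEATURES:             # answers encoded by a programmer
                weights[f] += cands[correct][f] - cands[guess][f]

    print(weights)   # ends up pointing the same way as HIDDEN_TRUTH --
                     # discovered from examples, never programmed in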

>> I never claimed we would someday understand how to make an AI more 
>> intelligent than ourselves, I only said that someday such an AI would get 
>> made. 
 
> And how are you sure it has not already been achieved?

>>Because computers don't rule the world. Yet.

And you "know" this how? 

> What I said about needing to understand that which you are studying in order 
> to really be able to manipulate, extend, emulate, simulate, etc. is not only 
> true -- as you admit

>>I don't admit that at all! It is sufficient but not necessary.
 
I begin to gather you are the type who will never admit anything. So be it.

> With no understanding of the symbol stream you have no knowledge of what to 
> do with the symbol stream passing across your view

>>And that is why even now we often don't understand what machines are doing or 
>>why; we let them keep on doing it however because whatever mysterious thing 
>>they're doing we figure it's probably important and don't dare stop it.

I am confused. What do you mean, we don't understand what machines are doing?
We do, though -- to a large degree and to a fine level of detail -- understand 
how software systems are working, even in the dynamic dimensions of a given 
operational instance. While we may not understand precisely what is happening, 
or what code will be generated by some code-generation process (whose output 
may, for example, become the input or the model of yet another process), we do 
have a very good general idea of how it is all working and of how to ensure 
that the outcomes being generated are of high fidelity and have predictive 
value.

Yes, some systems have become so deep, vastly extended and multi-nodal in 
nature that we probably can no longer really model them completely, but all of 
the processes at work in these vast distributed systems are well known to 
information scientists & engineers. While the detailed inner workings may be 
beyond our knowledge -- though artifacts of the dynamic process may exist in 
logs, for example -- they are generally understood in terms of design patterns, 
etc.

> This applies to understanding the brain as well... it is and will remain a 
> mystery until we go in and figure out its fine-grained workings.

>> It is entirely possible that we will never understand the fine grained 
>> workings of the brain, but that won't matter because the computers will 
>> understand it.

And you state this based on what assumptions?

-Chris
 >> John K Clark   
