RE: Two reasons why computers IMHO cannot exhibit intelligence

2012-08-27 Thread William R. Buckley
Roger:

 

I suggest that at root, you have vitalist sympathies.

 

wrb

 

From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of Roger Clough
Sent: Monday, August 27, 2012 4:07 AM
To: everything-list
Subject: Two reasons why computers IMHO cannot exhibit intelligence

 

Hi meekerdb 

 

IMHO computers cannot have intelligence, because intelligence consists of
at least one ability: the ability to make autonomous choices (choices
completely of one's own). Computers can do nothing on their own; they can
only do what software and hardware tell them to do.

Another, closely related, reason is that there must be an agent that does
the choosing, and IMHO the agent has to be separate from the system.
Gödel, perhaps, I speculate.

 

 

Roger Clough,   rclo...@verizon.net

8/27/2012 

Leibniz would say, "If there's no God, we'd have to invent him so everything
could function."

- Receiving the following content - 

From: meekerdb   

Receiver: everything-list   

Time: 2012-08-26, 14:56:29

Subject: Re: Simple proof that our intelligence transcends that of computers

 

On 8/26/2012 10:25 AM, Bruno Marchal wrote:
>
> On 25 Aug 2012, at 12:35, Jason Resch wrote:
>
>>
>> I agree different implementations of intelligence have different
capabilities and 
>> roles, but I think computers are general enough to replicate any
intelligence (so long 
>> as infinities or true randomness are not required).
>
> And now a subtle point. Perhaps.
>
> The point is that computers are general enough to replicate intelligence
EVEN if 
> infinities and true randomness are required for it.
>
> Imagine that our consciousness require some ORACLE. For example under the
form of a some 
> non compressible sequence 11101111011000110101011011... (say)
>
> Being incompressible, that sequence cannot be part of my brain at my
substitution level, 
> because this would make it impossible for the doctor to copy my brain into
a finite 
> string. So such sequence operates "outside my brain", and if the doctor
copy me at the 
> right comp level, he will reconstitute me with the right "interface" to
the oracle, so I 
> will survive and stay conscious, despite my consciousness depends on that
oracle.
>
> Will the UD, just alone, or in arithmetic, be able to copy me in front of
that oracle?
>
> Yes, as the UD dovetails on all programs, but also on all inputs, and in
this case, he 
> will generate me successively (with large delays in between) in front of
all finite 
> approximation of the oracle, and (key point), the first person
indeterminacy will have 
> as domain, by definition of first person, all the UD computation where my
virtual brain 
> use the relevant (for my consciousness) part of the oracle.
>
> A machine can only access to finite parts of an oracle, in course of a
computation 
> requiring oracle, and so everything is fine.

That's how I imagine COMP instantiates the relation between the physical
world and 
consciousness; that the physical world acts like the oracle and provides
essential 
interactions with consciousness as a computational process. Of course that
doesn't 
require that the physical world be an oracle - it may be computable too.

Brent

>
> Of course, if we need the whole oracular sequence, in one step, then comp
would be just 
> false, and the brain need an infinite interface.
>
> The UD dovetails really on all programs, with all possible input, even
infinite non 
> computable one.
>
> Bruno
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
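The dovetailing Bruno describes, in which every program is interleaved with every finite approximation of the oracle, can be sketched as a toy enumeration in Python. The function name and the triple layout here are illustrative assumptions of mine, not Marchal's actual construction; the point is only the scheduling order, under which no single run ever needs the whole infinite oracle:

```python
from itertools import count, islice

def dovetail(num_programs):
    """Enumerate (program, oracle_prefix_length, step_bound) triples so that
    every program is eventually scheduled on every finite oracle prefix with
    every step bound: stage n covers the first n programs on prefixes up to
    length n, each run for n steps."""
    for n in count(1):                      # stage n
        for i in range(min(n, num_programs)):       # which program
            for k in range(n + 1):                  # oracle prefix length
                yield (i, k, n)                     # run i on prefix k for n steps

# Each scheduled run sees only a finite oracle prefix, yet every
# (program, prefix) pair recurs at every later stage with a larger step bound.
seen = {(i, k) for i, k, _ in islice(dovetail(3), 200)}
assert (2, 4) in seen   # program 2 paired with an oracle prefix of length 4
```

This is why, as the post says, a machine only ever accesses finite parts of an oracle in the course of a computation.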

-- 
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at
http://groups.google.com/group/everything-list?hl=en.





Re: Two reasons why computers IMHO cannot exhibit intelligence

2012-08-27 Thread Stathis Papaioannou
On Mon, Aug 27, 2012 at 9:07 PM, Roger Clough  wrote:
> Hi meekerdb
>
> IMHO I don't think that computers can have intelligence
> because intelligence consists of at least one ability:
> the ability to make autonomous choices (choices completely
> of one's own). Computers can do nothing on their own,
> they can only do what software and hardware tell them to do.

But people, too, can only do what their software and hardware tell them
to do. The hardware is the body, and the software is the configuration
the hardware is placed in as a result of its exposure to the environment.


-- 
Stathis Papaioannou




Re: Two reasons why computers IMHO cannot exhibit intelligence

2012-08-27 Thread John Clark
On Mon, Aug 27, 2012  Roger Clough  wrote:

>
> > I don't think that computers can have intelligence
>


But computers can solve equations better than you, play a game of chess
better than you, be a better research librarian than you, and win more money
on Jeopardy than you; so if they don't have intelligence they apparently
have something better.

  John K Clark




Re: Simple proof that our intelligence transcends that of computers

2012-08-27 Thread John Clark
On Sun, Aug 26, 2012  Craig Weinberg  wrote:

> A pendulum is only a metal rod. A clock is nothing but gears


"A brain is nothing but a glob of grey goo" says the robot.

> There is no clock sauce that makes this assembly a clock.


Yes there is: the clock sauce is the information about what the positions
of the atoms in the clock should be.

>> You engage in that manufacturing process for a reason or you do not
>> do so for a reason.
>>
>
> > You are the one who is relating everything to the idea of reasons, not
> me.
>

That is quite simply untrue, I never said everything happens for a reason!
I said everything happens for a reason OR everything does NOT happen for a
reason. Why this is supposed to be controversial escapes me.

> Why should we want to justify anything in the first place.


You tell me, you're the one who brought up justification.

> I would never assume that someone has no reason for their belief


Then assuming your beliefs are consistent (a gargantuan assumption I
admit)  you believe that the belief generator in the mind is as
deterministic as a cuckoo clock.


> > I don't think in terms of winning debates or proving their unworthiness
> to myself


Baloney.

   >> Yes, there are many astronomically complex reasons for a typhoon, so
>> I guess typhoons have free will.
>>
>
> > Are you being serious? Should we put typhoons on trial and punish them
> so that they will learn to stay away from our populated areas?
>

You tell me, you're the one going on and on about how the fact that reasons
can be complex has something to do with the "free will" noise.

> The cuckoo clock can't choose from among the many influences or choose to
> seek a new alternative, but I can.
>

You chose it because you liked it better than the alternative, so you made
the choice for a reason, and the mechanical bird jumped out of the clock at
noon for a reason too. For months now you have been chanting the word
"choose" as if it magically sweeps away all problems; it does not.

> I made the reason.


And something caused you to make that reason or something did not cause you
to make that reason.  Cuckoo clock or roulette wheel.

> Reasoning is a process


Yes exactly, reasoning is a process, that is to say a series of steps
leading to an outcome; a very good example of that would be a computer
program.


> > Voluntary manslaughter is not an accident, it is unpremeditated murder.
> There is a difference.


A difference the law is unable to coherently explain, which is why criminal
law is such an incredible muddle.

> It sounds like you are saying they [my opinions] are robotic, in which
> case there is no possibility that your robotic opinions could be any closer
> to an objective truth than my robotic opinions.
>

Not true. If the steps in my reasoning have fewer random errors than yours
do, and the process does not start with axioms like "everything is true and
everything is false", then my robotic opinions will be closer to the truth
than your robotic opinions.

> It's funny that you care about the free market but without any free
> agency to actually use it.


You could make a model approximating how the world economy will evolve if
you assume all 7 billion people are rational agents trying to maximize
their gain. It's only an approximation because some people are not rational
and "gain" can mean more than just making money, and even so that's far too
complex for even a supercomputer to calculate; so we must make do with a
simplified approximation of a simplified approximation of the real thing.
And I haven't even mentioned things like the weather, earthquakes and
technological progress, which can strongly influence economies. The
communists thought they could figure all this out and history proved them
not just wrong but spectacularly wrong.

> The reason doesn't matter even if there was one.


Then I don't know what "doesn't matter" means, because "X caused Y and X
did not cause Y" doesn't mean anything.

>The butterfly wing was the reason. Who cares.


I do.

> The point is that you can't approach the totality of the cosmos and
> consciousness as a mechanical problem.


True, but the totality of the cosmos and consciousness is a mechanical
problem or it is not a mechanical problem.


> > I don't think that having reasons or no reasons matters at all.


I believe that's true, that is what you think.

  John K Clark




Re: The hypocrisy of materialism

2012-08-27 Thread Richard Ruquist
John:  I think those arithmetical values must be implemented in matter to
become operational.

Richard: Agreed, as long as the compactified dimensions of string theory
are a form of matter and I am a crackpot.

On Mon, Aug 27, 2012 at 11:53 AM, John Clark  wrote:

> On Sun, Aug 26, 2012  Bruno Marchal  wrote:
>
> > A popular subproblem consists in explaining how a grey brain can
>> generate the subjective color. An outline would be given by
>> 1) a theory of qualia. This just means some semi-axiomatic definition of
>> qualia, some agreement on what characterizes them, etc. (For example:
>> qualia are subjective sensations.)
>>
>
>
> And subjective sensations are qualia. You need more than a dictionary list
> of synonyms and I have no idea how to get more. And if you're not clear
> about what you're trying to explain then your theory explaining that vague
> mush is unlikely to be any good.
>
> > 2) a theory of mind. this can be computationalism, or even just computer
>> science, or even just arithmetic + a supervenience thesis.
>>
>
> By "supervenience thesis" I assume you mean a theory explaining how lower
> level operations of a system, like the firing of neurons in the brain, can
> lead to higher level attributes like intelligence and consciousness. Well
> yes that's the name of the game and I can see how the quest for an
> intelligence theory would be genuine science; but the other would not be,
> because consciousness theories are just too easy to crank out: out of the
> infinite number of potential consciousness theories there is no way to
> experimentally determine which one is correct. That is also why
> consciousness theories (but not intelligence theories!) are so popular with
> crackpots.
>
> And it's got to be more than just arithmetic. Numerical relationships
> always have and always will exist, but the mind of John K Clark has not and
> will not. I think those arithmetical values must be implemented in matter
> to become operational.
>
>
>> > 3) an embedding of the theory of qualia in the theory of mind,
>> respecting some faithfulness conditions.
>>
>
> Correct me if I'm wrong but I think you mean the use of induction to infer
> the structure of something from statistical data, but you have no data at
> all about the consciousness of anything except for that of Bruno Marchal
> and you can't develop a viable theory or even use induction with only one
> example.
>
> > Most religious belief, like the belief in the existence of primary
>> matter, or of mind, or God, etc, can be seen as attempts to clarify, or
>> hide, the mind-body problem.
>>
>
> Religion never EVER clarifies anything, it just adds pointless wheels
> within wheels to the problem of mind that is already complex enough as it
> is.
>
>   John K Clark
>
>
>
>




UNIVERSAL-DOVETAILER-ARGUMENT.HTML

2012-08-27 Thread Richard Ruquist
CLUB OF SUPPER CLUB 



http://clubofsc.blogspot.com/2011/08/my-topic-universal-dovetailer-argument.html
Copied from white-on-black background.

MONDAY, AUGUST 29, 2011
My topic - the Universal Dovetailer Argument
This month I'm breaking with tradition and actually *posting early!* This
is for two reasons: 1) because I have some time on my hands being sick at
home today, and 2) because the topic may be more challenging than the
average, so I'm giving you all some time to get acquainted with the
material before we discuss it. I'm going to try to be as concise as
possible, but a certain level of verbosity is unavoidable.

First of all, let me make my own position on this theory clear. I don't
'believe' it as such. In fact I actively disbelieve it, for reasons I won't
go into here - I'll leave that for the day. Nevertheless it does open up
some pretty fascinating philosophical vistas, and ties into a whole bunch
of stuff including the dreaded Cryogenic Paradox, quantum theory, the
objective existence of numbers, the modern multiverse cosmology, and more.
As a backgrounder, I stumbled on this through a Google group called the
Everything List, a discussion group bound together by the concept that
'everything exists', which I'd been referred to by someone who left a
comment on my Cryogenic Paradox post.

But I promised concision, so onwards! The idea I want to present is called
the Universal Dovetailer Argument (UDA), and was proposed by a French
academic by the name of Bruno Marchal, who is a research fellow in AI at
the Université Libre de Bruxelles. Here is Marchal's original paper on the
subject (from which I attempt to distill this summary). Unfortunately
Marchal's argumentation style is not exactly crystalline in its clarity
for a non-mathematician, a problem exacerbated by his inconsistent
conjugation of English verbs. Nevertheless, I think I've succeeded in
deciphering 90% of the gist. I'll tell you the bit I don't follow (and
have my doubts about) when I get there.

So, the UDA proposes itself as a refutation of materialism, and purports to
"reduce physics to fundamental machine psychology", by which I think he
means the mathematically defined laws by which computing devices (namely,
us) compute their self-consistent states (I'll explain). It relies on the
following assumptions:

   - That a brain (and a consciousness) can be substituted by a
   'Turing-machine' type computational device, i.e., we could theoretically
   have a brain transplant and not know the difference.
   - The 'Church Thesis', which basically says that one type of computer
   can always perfectly emulate another (contradicting my suspicion that my
   old Commodore 64 would struggle with Photoshop CS5, but if you fed the data
   through in small enough blocks, and could maintain state with enough
   memory...). We're talking mathematical inputs and outputs here, not the
   time taken nor the form of presentation.
   - Arithmetical Realism - namely the idea that mathematical relations and
   propositions "are true independently of me, you, humanity, the physical
   universe" etc.

These assumptions sum to what Marchal calls Classical Computationalism (or
just "comp" for brevity's sake). I can personally accept the second of
these assumptions, C64 notwithstanding. The first and last are a lot more
debatable, but as I said, we're "running with it".

So here goes the argument in 8 steps:

   1. "Comp" allows for teleportation, whereby a person is cut at one place
   and pasted at another. (I can't resist relating a joke at this point: Man
   standing in teleporter hears an announcement: "Your duplicate has been
   created at the destination, but due to a technical fault, there has been a
   delay in vaporization at this end. Please stand by...") Now the person at
   the end of the teleportation is a "consistent extension" of the one prior
   to teleportation, so Marchal assumes it is the same
   individual/consciousness. All we have done is add to the individual's
   belief set a new one relating to location. This is in fact irresistible
   unless we try to "stick" consciousness to the physical substrate of the
   "computation" that is the mind, but that violates assumptions 1 (and 2)
   above.
   2. Now let us imagine this person keeps a diary which gets teleported
   along with him or her and records what happens. A third-person account of
   the teleportation will at this stage be completely consistent with the
   diary (first person) account. The pronouns will be different, but the
   proposition will be logically identical ("Pierz was teleported from
   Melbourne to Sydney" vs "I was teleported from Melb to Syd"). But imagine
   there is now inserted a time delay between "departure" and "arrival" in the
   teleportation process. If the teleportee is not allowed reference to any
   exte

Re: The hypocrisy of materialism

2012-08-27 Thread John Clark
On Sun, Aug 26, 2012  Bruno Marchal  wrote:

> A popular subproblem consists in explaining how a grey brain can generate
> the subjective color. An outline would be given by
> 1) a theory of qualia. This just means some semi-axiomatic definition of
> qualia, some agreement on what characterizes them, etc. (For example:
> qualia are subjective sensations.)
>


And subjective sensations are qualia. You need more than a dictionary list
of synonyms and I have no idea how to get more. And if you're not clear
about what you're trying to explain then your theory explaining that vague
mush is unlikely to be any good.

> 2) a theory of mind. this can be computationalism, or even just computer
> science, or even just arithmetic + a supervenience thesis.
>

By "supervenience thesis" I assume you mean a theory explaining how lower
level operations of a system, like the firing of neurons in the brain, can
lead to higher level attributes like intelligence and consciousness. Well
yes that's the name of the game and I can see how the quest for an
intelligence theory would be genuine science; but the other would not be,
because consciousness theories are just too easy to crank out: out of the
infinite number of potential consciousness theories there is no way to
experimentally determine which one is correct. That is also why
consciousness theories (but not intelligence theories!) are so popular with
crackpots.

And it's got to be more than just arithmetic. Numerical relationships always
have and always will exist, but the mind of John K Clark has not and will
not. I think those arithmetical values must be implemented in matter to
become operational.


> > 3) an embedding of the theory of qualia in the theory of mind,
> respecting some faithfulness conditions.
>

Correct me if I'm wrong but I think you mean the use of induction to infer
the structure of something from statistical data, but you have no data at
all about the consciousness of anything except for that of Bruno Marchal
and you can't develop a viable theory or even use induction with only one
example.

> Most religious belief, like the belief in the existence of primary
> matter, or of mind, or God, etc, can be seen as attempts to clarify, or
> hide, the mind-body problem.
>

Religion never EVER clarifies anything, it just adds pointless wheels
within wheels to the problem of mind that is already complex enough as it
is.

  John K Clark




Re: Simple proof that our intelligence transcends that of computers

2012-08-27 Thread Bruno Marchal


On 27 Aug 2012, at 15:32, Stephen P. King wrote:


On 8/27/2012 8:48 AM, Bruno Marchal wrote:


On 26 Aug 2012, at 21:59, Stephen P. King wrote:


On 8/26/2012 2:09 PM, Bruno Marchal wrote:


On 25 Aug 2012, at 15:12, benjayk wrote:




Bruno Marchal wrote:



On 24 Aug 2012, at 12:04, benjayk wrote:

But this avoids my point that we can't imagine that levels, context and
ambiguity don't exist, and this is why computational emulation does not
mean that the emulation can substitute the original.


But here you make a level confusion, as I think Jason is trying to point
out.


A similar one to the one made by Searle in the Chinese Room.

As an emulator (computing machine), Robinson Arithmetic can simulate
Peano Arithmetic exactly, even as a prover. So for example Robinson
Arithmetic can prove that Peano Arithmetic proves the consistency of
Robinson Arithmetic. But you cannot conclude from that that Robinson
Arithmetic can prove its own consistency; that would contradict Gödel II.
When PA uses the induction axiom, RA might just say "huh", and apply it
for the sake of the emulation without any inner conviction.
I agree, so I don't see how I confused the levels. It seems to me you
have just stated that Robinson indeed cannot substitute Peano Arithmetic,
because RA's emulation of PA makes sense only with respect to PA (in
cases where PA does a proof that RA can't do).


Right. It makes only first person sense to PA. But then RA has succeeded
in making PA alive, and PA could a posteriori realize that the RA level
was enough. Like I converse with Einstein's brain's book (à la
Hofstadter), just by manipulating the pages of the book. I don't become
Einstein through my making of that process, but I can have a genuine
conversation with Einstein through it. He will know that he has survived,
or that he survives through that process.


Dear Bruno,

  Please explain this statement! How is there an "Einstein", the person
that will know anything in that case? How is such an entity capable of
"knowing" anything that can be communicated? Surely you are not
considering a consistently solipsistic version of Einstein! I don't have
a problem with that possibility per se, but you must come clean about
this!


What is the difference between processing the book with a brain, a
computer, or a book? This is not step 8, it is step 0. Or I miss what
you are asking.


Dear Bruno,

   The question that I am asking is how you deal with multiple minds. So
far all of your discussion seems to assume only a single mind and, at
most, a plurality of references to that one mind.


?

After a WM duplication there are already two minds. The first person
plural handles the many minds.

That is, it *needs* PA to make sense, and so we can't ultimately
substitute one with the other (just in some relative way, if we are
using the result in the right way).


Yes, because that would be like substituting a person by another, on the
pretext that they both play the same role. But comp substitutes the lower
process, not the high-level one, which can indeed be quite different.


  Is there a spectrum or something similar to it for substitution levels?


There is a highest substitution level, above which you might still
survive, but with some changes in your first person experience (that you
can or cannot be aware of). Below that highest level, all levels are
correct, I would say, by definition.


   OK. This seems to assume a background of the physical world...


Not at all. You need only a Turing universal system, and they abound in
arithmetic.






If your level is the level of neurons, you can understand that if I
simulate you at the level of the elementary particles, I will
automatically simulate you at the level of your neurons, and you will
not see the difference (except for the price of the computer and memory,
and other non-relevant things like that). OK?


   Yes, but that is not my question. When you wrote "I don't become
Einstein through my making of that process, but I can have a genuine
conversation with Einstein through it. He will know that he has survived,
or that he survives through that process", there seems to be the
implication that the mind of Einstein and the mind of Bruno are not one
and the same mind, at least in the sense that you cannot become him
merely by reading a book and changing your name.


Yes. comp has no problem with many minds.

It is like the word "apple" cannot really substitute a picture of an
apple in general (still less an actual apple), even though in many
contexts we can indeed use the word "apple" instead of using a picture of
an apple, because we don't want to be shown how it looks, but just know
that we talk about apples - but we still need an actual apple, or at
least a picture, to make sense of it.


Here you make an invalid jump, I think. If I play chess on a  
computer, and make a backup of it, and then con

Re: Gödel theorem, the last vestige of magic Pythagorean mysticism.

2012-08-27 Thread Bruno Marchal


On 27 Aug 2012, at 15:15, Richard Ruquist wrote:


Is it true that real numbers are complete?



It is true that the first order theory of the real numbers is complete.
This has been proven by Tarski.

Now add some trigonometric function axioms, and you are incomplete again,
as the trigonometric functions will re-instantiate the natural numbers,
by equations like sin(2PI*x) = 0. To be short.


Yes, from a first order logic perspective the real numbers (R, +, x) are
simpler than (N, +, x), as the second is Turing universal and the first
is not.
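To spell out the standard facts behind this exchange (stated here for reference; I use sin(pi*x), whose zero set is exactly the integers, as the cleanest form of the equation):

```latex
% Tarski: the first-order theory of the real field (a real closed field)
% is complete and decidable. Adding the sine function re-instantiates the
% integers, since their set becomes definable:
\[
  \mathbb{Z} \;=\; \{\, x \in \mathbb{R} \mid \sin(\pi x) = 0 \,\}
\]
% Hence (\mathbb{R}, +, \times, \sin) interprets (\mathbb{N}, +, \times),
% is Turing universal, and inherits Gödel incompleteness.
```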


Bruno





Richard

On Mon, Aug 27, 2012 at 9:11 AM, Bruno Marchal   
wrote:


On 27 Aug 2012, at 11:47, Alberto G. Corona wrote:

Please don't take my self-confident style for absolute certainty. I just
put forward my ideas for discussion.


The fascination with which the Gödel theorem is taken may reflect the
atmosphere of magic that invariably surrounds anything for which there is
a lack of understanding. In this case, with the addition of a supposed
superiority of mathematicians over machines.


I have never really heard of a mathematician or a logician convinced by
such an idea.




What Gödel discovered was that the set of true statements in mathematics
(in the subset of integer arithmetic) cannot be demonstrated from a
finite set of axioms. And to prove this, he invented a way to discover
new unprovable statements, given any set of axioms, by means of an
automatic procedure, called diagonalization, that the most basic
interpreted program can perform. No more, no less. Although this was the
end of the Hilbert idea.


OK.
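For reference, the diagonalization procedure mentioned above rests on the diagonal lemma; a compact standard statement (not specific to this thread):

```latex
% Diagonal lemma: for any arithmetic formula \varphi(x) there is a
% sentence G such that
\[
  T \vdash\; G \leftrightarrow \varphi(\ulcorner G \urcorner)
\]
% Taking \varphi(x) := \neg\mathrm{Prov}_T(x) yields the Gödel sentence
\[
  T \vdash\; G \leftrightarrow \neg\mathrm{Prov}_T(\ulcorner G \urcorner)
\]
% which, if T is consistent and recursively axiomatized, T cannot prove.
```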




What Penrose and others did is to find a particular (although quite
direct) translation of the Gödel theorem to an equivalent problem in
terms of a Turing machine, where the machine


Translating Gödel in terms of Turing machines is well-known pedagogical
folklore in logic. It is already in the old books by Kleene, Davis, etc.
It does make things simpler, but sometimes it leads to misunderstanding,
notably due to the common confusion between computing and proving.




does not perform the diagonalization and the set of axioms can not  
be extended.


That's the case for the enumeration of total computable functions, and
is well known; I am not sure Penrose found anything new here. Penrose
just assumes that Gödel's theorem does not apply to us, and he assumes
in particular that humans know that they are consistent, without
justification. I agree with Penrose, but not for any form of
formalisable knowledge. And this is true for machines too.




By restricting their reasoning to this kind of framework, Penrose
demonstrates what he wants to demonstrate: the superiority of the mind,
which is capable of doing a simple diagonalization.


IMHO, I do not find the Gödel theorem a limitation for computers. I
think that Penrose and others did a correct translation from the Gödel
theorem to a problem of a Turing machine. But this translation can be
done in a different way.


It is possible to design a program that modifies itself by adding new
axioms, the ones produced by the diagonalizations, so that the number of
axioms can grow for any need. This is routinely done for equivalent
problems in rule-based expert systems or in ordinary interpreters (aided
by humans) in complex domains. But reduced to integer arithmetic, a
Turing machine that implements a math proof system at the deep level,
that is, in an interpreter where new axioms can be automatically added
through diagonalizations, may expand the set of known deductions by
incorporating new axioms. This is not prohibited by the Gödel theorem.
What is prohibited by such a theorem is to know ALL true statements in
this domain of integer mathematics. But this also applies to humans. And
a computer can realize that a new axiom is absent from its initial set
and add it, just like humans.
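The self-extending program sketched above can be caricatured in a few lines of Python. Everything here is a stub of my own invention: the class name is hypothetical and `proves` stands in for a real proof search; only the adopt-if-independent step reflects the idea in the text (Gödel's proof being constructive, one can always manufacture such an independent sentence and adopt it):

```python
# Toy sketch of a theory that grows by adopting independent sentences as
# new axioms. The proof machinery is a placeholder, not a real prover.

class GrowingTheory:
    def __init__(self, axioms):
        self.axioms = set(axioms)

    def proves(self, sentence):
        # Stub: only the axioms themselves count as provable here.
        return sentence in self.axioms

    def adopt_if_independent(self, sentence, negation):
        # If the theory settles neither the sentence nor its negation,
        # extend the axiom set, as Gödel's constructive proof permits.
        if not self.proves(sentence) and not self.proves(negation):
            self.axioms.add(sentence)
            return True
        return False

t = GrowingTheory({"0=0"})
assert t.adopt_if_independent("G1", "not G1")      # G1 was independent: adopted
assert t.proves("G1")                              # now an axiom
assert not t.adopt_if_independent("G1", "not G1")  # no longer independent
```

The same limitation the post mentions survives the caricature: however many rounds of adoption are run, the resulting axiom set stays finite, so it never exhausts arithmetical truth.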


I do not see in this a limitation on human free will. I wrote about this
before. The notion of free will based on the deterministic nature of
physics or computations is a degenerate, false problem which is an
obsession of the Positivists.


I get the feeling I have already commented on this. Yes, Gödel's  
proof is constructive, and machines can use it to extend themselves,  
and John Myhill (and myself, and others) have exploited this in many ways.


Gödel's second incompleteness theorem has been generalized by Löb,  
and then Solovay showed that the modal logical systems G and G*  
answer all the questions at the modal propositional level. For  
example the second incompleteness theorem <>t -> ~[]<>t is a theorem  
of G, and <>t is a theorem of G*, etc.
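In the standard provability-logic notation (writing []p as \Box p for "p is provable" and <>p as \Diamond p for its dual), the facts cited here can be restated compactly; \top stands in for the constant t of the text:

```latex
% Löb's theorem, generalizing Gödel II (a theorem scheme of G):
\Box(\Box p \to p) \to \Box p

% Formalized second incompleteness theorem, a theorem of G:
\Diamond \top \to \neg\Box\Diamond\top

% Consistency itself: true for sound machines, hence a theorem of G*,
% but not of G (the machine cannot prove it about itself):
\Diamond \top
```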


Gödel's theorem is not a handicap for machines; on the contrary, it  
prevents the world of numbers and machines from any normative or  
"totalitarian" (complete) theory about them. It shows that  
arithmetical truth, of computerland, is inexhaustible.  
Incompleteness is a chance for mechanism, as Judson Webb already argued.

Re: Two reasons why computers IMHO cannot exhibit intelligence

2012-08-27 Thread Bruno Marchal


On 27 Aug 2012, at 13:07, Roger Clough wrote:


Hi meekerdb

IMHO I don't think that computers can have intelligence,
because intelligence consists of at least one ability:
the ability to make autonomous choices (choices completely
of one's own). Computers can do nothing on their own;
they can only do what software and hardware tell them to do.

Another, closely related reason is that there must be an agent  
that does the choosing,

and IMHO the agent has to be separate from the system.
Gödel, perhaps, I speculate.


I will never insist on this enough. All the Gödel stuff shows that  
machines are very well suited for autonomy. In a sense, most of  
applied computer science is used to help control what can really  
become uncontrollable and too autonomous, a bit like educating  
children.


Computers are not stupid; we work a lot at making them so.

Bruno






Roger Clough, rclo...@verizon.net
8/27/2012
Leibniz would say, "If there's no God, we'd have to invent him so  
everything could function."

- Receiving the following content -
From: meekerdb
Receiver: everything-list
Time: 2012-08-26, 14:56:29
Subject: Re: Simple proof that our intelligence transcends that of  
computers


On 8/26/2012 10:25 AM, Bruno Marchal wrote:
>
> On 25 Aug 2012, at 12:35, Jason Resch wrote:
>
>>
>> I agree different implementations of intelligence have different  
capabilities and
>> roles, but I think computers are general enough to replicate any  
intelligence (so long

>> as infinities or true randomness are not required).
>
> And now a subtle point. Perhaps.
>
> The point is that computers are general enough to replicate  
intelligence EVEN if

> infinities and true randomness are required for it.
>
> Imagine that our consciousness requires some ORACLE. For example,
> in the form of some incompressible sequence
> 11101111011000110101011011... (say)

>
> Being incompressible, that sequence cannot be part of my brain at
> my substitution level, because this would make it impossible for the
> doctor to copy my brain into a finite string. So such a sequence
> operates "outside my brain", and if the doctor copies me at the
> right comp level, he will reconstitute me with the right "interface"
> to the oracle, so I will survive and stay conscious, even though my
> consciousness depends on that oracle.

>
> Will the UD, just alone, or in arithmetic, be able to copy me in  
front of that oracle?

>
> Yes, as the UD dovetails on all programs, but also on all inputs,
> and in this case, it will generate me successively (with large
> delays in between) in front of all finite approximations of the
> oracle, and (key point) the first person indeterminacy will have as
> domain, by definition of first person, all the UD computations where
> my virtual brain uses the relevant (for my consciousness) part of
> the oracle.
>
> A machine can only access finite parts of an oracle in the course of
> a computation requiring an oracle, and so everything is fine.

That's how I imagine COMP instantiates the relation between the  
physical world and
consciousness; that the physical world acts like the oracle and  
provides essential
interactions with consciousness as a computational process. Of  
course that doesn't
require that the physical world be an oracle - it may be computable  
too.


Brent

>
> Of course, if we need the whole oracular sequence in one step, then
> comp would just be false, and the brain would need an infinite
> interface.
>
> The UD really dovetails on all programs, with all possible inputs,
> even infinite non-computable ones.
>
> Bruno
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.





http://iridia.ulb.ac.be/~marchal/






Re: Is there a Sam Spencer on this list ?

2012-08-27 Thread Bruno Marchal
I guess it is spam. Also, your address is rclo...@verizon.net, not rogclo...@verizon.net, as in Spencer's message.


Bruno


On 27 Aug 2012, at 12:55, Roger Clough wrote:


Is there a Sam Spencer on this list ?
He keeps sending out emails that, before opening, appear
to come from me, but actually don't. His name, not mine,
does however appear when you open the email, as below.


Roger Clough, rclo...@verizon.net
8/27/2012
Leibniz would say, "If there's no God, we'd have to invent him so  
everything could function."

- Receiving the following content -
From: Sam Spencer
Receiver: everything-list
Time: 2012-08-25, 08:22:32
Subject: Animation test

The attachment of the original message is as follows:
  (1). inmasirne.gif

This is an animation test, please ignore.

-Sam Spencer







http://iridia.ulb.ac.be/~marchal/






Re: A remark on Richard's paper

2012-08-27 Thread Bruno Marchal


On 27 Aug 2012, at 06:34, Russell Standish wrote:


On Sat, Aug 25, 2012 at 06:00:26PM +0200, Bruno Marchal wrote:

No. That was what I told him. But he left the place, simply, without
further comment, and quite disrespectfully. Many people were shocked
by this behavior, but said nothing. I think Chalmers is in part
responsible for the spreading of the defamation I am subject to
across the ocean, and for why nobody dares to mention the first
person indeterminacy, or my name.

I am afraid he has just been brainwashed by the main victims of a
manipulative form of moral harassment, as I described in the book
ordered by Grasset in 1998 (but never published).


Well, hopefully we will remedy that soon, in English at least
:). There's half a chapter left to translate, and once Kim has a
chance to proofread it, we should be able to get you a draft.



Good news. Nice. Thanks for telling me.





He is quite
plausibly a member of the same sect which put fraternity above
facts. It is a form of hidden corporatism.

I'm afraid Chalmers might be just an opportunist.


Who isn't! This is not a serious charge.


He is clearly not
a serious scientist, but seems to be an expert in self-marketing.


Sadly, one has to be, to be noticed.


His fading qualia paper is not so bad, but it is hardly original and
lacks many references. The hard problem of consciousness has long
been known to all philosophers of mind as the mind-body problem, and
his formulation is, besides, physicalist and not general.



I had lunch with him in early 2006 in Canberra, and this was after I
had sent him a draft of my ToN book, so he is well acquainted with the
ideas of this list. My impression was that he has a fairly fixed world
view, a steely mind capable of finding flaws in presentations of your
argument, and a general intolerance for woolly arguments. This is not
a bad thing, but I wasn't quite prepared for it at the time. The
subject material in ToN is quite convoluted, and to run through one
strand of it with someone like him is likely to run aground on some
differing conception or other. Marcus Hutter seems to think he might
be more predisposed to these ideas, though.

Nevertheless, I find it hard to believe that he might be spreading
malicious gossip about you.


?
I have never said that. I said that he has perhaps been a *victim* of  
gossip, about my work or me.


All I say is that he pretended that there is no first person  
indeterminacy. Like John Clark he probably confused 1-views and  
3-views, but unlike John Clark he has some notoriety in philosophy of  
mind, and is supposed to get that fundamental difference.


Unlike John Clark, but like Bill Taylor (as anyone can verify), he did  
not answer the question I asked him, so there is just no hope. It  
is impossible to communicate with people who are willingly deaf.


Chalmers is just not a scientist, period.
And *my* most charitable explanation of his behavior is that he has  
probably been brainwashed by my usual opponents (who dismissed UDA as  
too simple to be accepted as the subject of a thesis, but did not  
read it). Actually, some people have confided in me that this has  
been the case, in more than one circle.
And my opponents in Brussels are not opponents of my work, as they  
did not read it (a fact that I can prove, actually), but they oppose  
me because I am a witness of something. I don't want to talk about  
that now, and it is quite off-topic for this list.


I have only pseudo-problems with those who do not take the time to  
study the work, not with those reading it. With those who read it, I  
have the typical, usual problems on technical points, and most of the  
time it is because they are not familiar with elementary logic or  
with QM or with cognitive science, and they help me to improve the  
pedagogy.


I think the first seven steps are OK now, and that step 8 can still  
be improved, so, as you know, I am interested in continuing to  
discuss it. But there is no need to understand step 8 to understand  
that the first person indeterminacy already changes the common  
Aristotelian picture of the mind-body, or first-person/third-person,  
relationship, I think. Indeterminacy, non-cloning, and non-locality  
already follow from UDA 1-7.


It seems crazy to me how many computationalist philosophers neglect  
computer science, but this is due to the arbitrary cut between  
science and philosophy. My luck was to decide at the start to become  
a mathematical logician, to be sure to be mathematically correct and  
to have the genuine form of language to handle comp; but then  
philosophers roar as if science were preparing to invade their  
territory, as in the Stone Age, apparently.



Bruno



For one thing, I've never heard him, or
anyone else for that matter, even talk about your ideas, aside from
participants on these mailing lists. More than likely, he dismisses you
as a harmless crank, and doesn't think about you at all. For all I
know, he may h

Re: Simple proof that our intelligence transcends that of computers

2012-08-27 Thread Stephen P. King

On 8/27/2012 8:48 AM, Bruno Marchal wrote:


On 26 Aug 2012, at 21:59, Stephen P. King wrote:


On 8/26/2012 2:09 PM, Bruno Marchal wrote:


On 25 Aug 2012, at 15:12, benjayk wrote:




Bruno Marchal wrote:



On 24 Aug 2012, at 12:04, benjayk wrote:


But this avoids my point: that we can't imagine that levels, context
and ambiguity don't exist, and this is why computational emulation
does not mean that the emulation can substitute for the original.


But here you make a level confusion, as I think Jason tries to point out.

A similar one to the one made by Searle in the Chinese Room.

As an emulator (computing machine), Robinson Arithmetic can simulate
Peano Arithmetic exactly, even as a prover. So for example Robinson
Arithmetic can prove that Peano Arithmetic proves the consistency of
Robinson Arithmetic.
But you cannot conclude from that that Robinson Arithmetic can prove
its own consistency. That would contradict Gödel II. When PA uses the
induction axiom, RA might just say "huh", and apply it for the sake
of the emulation, without any inner conviction.
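The distinction just made, verifying a derivation relative to someone else's axioms versus endorsing its conclusion, can be sketched as a toy proof checker. Everything here is illustrative: formulas are opaque strings, "Ind" is a stand-in for PA's induction axiom, and nothing below is a real formalization of RA or PA.

```python
def checks_out(derivation, axioms):
    """Verify a derivation line by line, relative to a given axiom
    set, without ever asserting the conclusion itself."""
    proved = set()
    for line, just in derivation:
        if just == "axiom":
            ok = line in axioms
        else:  # ("mp", a, imp): modus ponens from two earlier lines
            kind, a, imp = just
            ok = (kind == "mp" and a in proved and imp in proved
                  and imp == f"({a} -> {line})")
        if not ok:
            return False
        proved.add(line)
    return True

pa = {"Ind", "(Ind -> Con(RA))"}   # toy stand-ins for PA's axioms
ra = {"(Ind -> Con(RA))"}          # toy RA: lacks induction "Ind"

proof = [
    ("Ind", "axiom"),
    ("(Ind -> Con(RA))", "axiom"),
    ("Con(RA)", ("mp", "Ind", "(Ind -> Con(RA))")),
]

# The checker confirms the derivation relative to PA's axioms...
print(checks_out(proof, pa))  # True
# ...but the same derivation fails relative to RA's own axioms.
print(checks_out(proof, ra))  # False
```

A weak system running the checker "proves that PA proves Con(RA)" in exactly this bookkeeping sense: it validates the steps against PA's axioms without those axioms, or the conclusion, becoming its own.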
I agree, so I don't see how I confused the levels. It seems to me 
you have just stated that Robinson indeed cannot substitute for Peano 
Arithmetic, because RA's emulation of PA makes sense only with 
respect to PA (in cases where PA does a proof that RA can't do).


Right. It makes only first person sense to PA. But then RA has 
succeeded in making PA alive, and PA could realize a posteriori that 
the RA level was enough.
It is like when I converse with Einstein's brain's book (à la 
Hofstadter), just by manipulating the pages of the book. I don't 
become Einstein through my carrying out of that process, but I can 
have a genuine conversation with Einstein through it. He will know 
that he has survived, or that he survives through that process.


Dear Bruno,

   Please explain this statement! How is there an "Einstein", the 
person, that will know anything in that case? How is such an entity 
capable of "knowing" anything that can be communicated? Surely you 
are not considering a consistently solipsistic version of Einstein! I 
don't have a problem with that possibility per se, but you must come 
clean about this!


What is the difference between processing the book with a brain, a 
computer, or a book? This is not step 8, it is step 0. Or am I 
missing what you are asking?


Dear Bruno,

The question that I am asking is how you deal with multiple minds. 
So far all of your discussion seems to assume only a single mind and, 
at most, a plurality of references to that one mind.










That is, it *needs* PA to make sense, and so
we can't ultimately substitute one for the other (just in some 
relative way, if we are using the result in the right way).


Yes, because that would be like substituting one person for another 
on the pretext that they both play the same role. But comp 
substitutes the lower-level process, not the high-level one, which 
can indeed be quite different.


   Is there a spectrum, or something similar to it, of substitution 
levels?


There is a highest substitution level, above which you might still 
survive, but with some changes in your first person experience (which 
you may or may not be aware of). Below that highest level, all levels 
are correct, I would say, by definition.


OK. This seems to assume a background of the physical world...

If your level is the level of neurons, you can understand that if I 
simulate you at the level of the elementary particles, I will 
automatically simulate you at the level of your neurons, and you will 
not see the difference (except for the price of the computer and 
memory, and other irrelevant things like that). OK?


Yes, but that is not my question. When you wrote "I don't become 
Einstein through my making of that process, but I can have a genuine 
conversation with Einstein through it. He will know that he has 
survived, or that he survives through that process", there seems to 
be the implication that the mind of Einstein and the mind of Bruno 
are not one and the same mind, at least in the sense that you cannot 
become him merely by reading a book and changing your name.








It is like how the word "apple" cannot really substitute for a 
picture of an apple in general (still less an actual apple), even 
though in many contexts we can indeed use the word "apple" instead of 
a picture of an apple, because we don't want to be shown how it looks 
but just to know that we are talking about apples; but we still need 
an actual apple, or at least a picture, to make sense of it.


Here you make an invalid jump, I think. If I play chess on a 
computer, and make a backup of it, and then continue on a totally 
different computer, you can see that I will be able to continue the 
same game with the same chess program, despite the computer being 
totally different. I just have to re-implement it correctly. Same 
with comp. Once we bet on the correct level, functionalism applies 
to that level and below, but not above (unless of course I am 
wil
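The chess-backup analogy is, at bottom, a claim that a program's state is serializable data, portable across machines. A minimal sketch of that point (the dict layout and move strings are invented for illustration, not any real engine's format):

```python
import json

# A chess game's full state, captured as plain data.
state = {
    "fen": "rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq - 0 1",
    "moves": ["e2e4"],
}

# "Backup": serialize the state on machine A.
backup = json.dumps(state)

# "Restore": a totally different machine parses the same bytes and
# obtains an identical state; the game continues unchanged.
restored = json.loads(backup)
assert restored == state

restored["moves"].append("e7e5")  # play on, on the new machine
print(restored["moves"])  # ['e2e4', 'e7e5']
```

What makes the continuation "the same game" is the data plus a correct re-implementation of the rules, not the particular hardware that holds it, which is the functionalist point being made above.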

Re: Re: Re: What is the mind-body problem ? How do monads cause change ?

2012-08-27 Thread Roger Clough
Hi Richard Ruquist 

No, intelligence is a function of mind and hence is unextended.
Brain is extended.

Mind is the monad of the brain, if I may.


Roger Clough, rclo...@verizon.net
8/27/2012 
Leibniz would say, "If there's no God, we'd have to invent him so everything 
could function."
- Receiving the following content - 
From: Richard Ruquist 
Receiver: everything-list 
Time: 2012-08-27, 09:16:20
Subject: Re: Re: What is the mind-body problem ? How do monads cause change ?


Hence, both are extended. QED


On Mon, Aug 27, 2012 at 9:13 AM, Roger Clough  wrote:

Hi Richard Ruquist 
 
The more brain, the more mind.
 
 
 
Roger Clough, rclo...@verizon.net
8/27/2012 
Leibniz would say, "If there's no God, we'd have to invent him so everything 
could function."
- Receiving the following content - 
From: Richard Ruquist 
Receiver: everything-list 
Time: 2012-08-27, 09:09:24
Subject: Re: What is the mind-body problem ? How do monads cause change ?


Roger, 
If the mind were not extended,
then animal intelligence would not depend on brain size.
Richard


On Mon, Aug 27, 2012 at 8:39 AM, Roger Clough  wrote:

It has been asked here-- what in fact is the mind-body problem ? 

http://oregonstate.edu/instruct/phl302/writing/mind-top.html 


"The Mind Body Problem 

What philosophers call the mind-body problem originated with Descartes. In 
Descartes' philosophy 
the mind is essentially a thinking thing, while the body is essentially an 
extended thing - something which occupies space. 
Descartes held that there is a two-way causal interaction between these two 
quite different kinds of substances. 
So, the body affects the mind in perception, and the mind affects the body in 
action. But how is this possible? 
How can an unextended thing affect something in space? How can something in 
space affect an unextended thing?" 
-
Immediately below I give an account of a man being pricked by a pin
in Leibniz's world versus such an action in the actual or phenomenal world.
In summary, and in addition:
1) They amount to the same account, one virtual and one actual or phenomenal.
2) Our so-called free will is only an apparent one.
3) Because monads overlap (are weakly nonlocal), since space is not a property,
monads can have some limited, unconscious awareness of the rest of the universe 
(including all life).
This awareness is generally very weak and generally unconscious.
Still, it means that we are an intimate part of the universe and all that 
happens.
4) The virtual world of the monad of man strictly portrays men
as blind, completely passive robots. However, his monad 
is inside of the supreme monad, which is his puppet-master. 
But at the same time, like Pinocchio, as I recall, he
becomes seemingly alive in the everyday sense that we feel we are alive,
but through the supreme monad in which he is subordinately enclosed.
5) There is some bleed-through of future perceptions, so we can have
some dim awareness of future happenings.

I will just briefly discuss actions here by man. Each man is entirely virtual,
a monad in the space of thought containing a database of perceptions 
(given to him by God) of all the perceptions of the other monads in the 
universe. 
Some of these (animals) are mindless and others feelingless, 
having only corporeal functions (plants, rocks).
Every monad has an internal source of energy, plus a pre-programmed 
set of virtual perceptions continuously and instantaneously given to him by 
the Supreme Monad, and a set of virtual actions the monad is programmed
to virtually desire or will, giving him new perceptions, as well as every other
monad in the universe. 
All of these must function as virtual agents or entities according to Leibniz's 
principle of preestablished harmony. Only the supreme monad (God) can perceive,
feel, and act.
So if God wants you to be pricked by a pin, feel the pain, and react,
he will cause a virtual monadic pin to virtually prick your sensory monad,
and then have you virtually feel pain as a monad, but actually feel
a real pain in the phenomenal world, and virtually jump and really
jump in both worlds, one virtually and one physically.
How does this differ
==
A MORE COMPLETE ACCOUNT OF CAUSATION BY MONADS
Personally, I am looking at the "how is this possible" aspect, 
first by asking what is possible from the aspect of Leibniz's metaphysics. 

What is possible is limited by Leibniz's monadology:

http://www.philosophy.leeds.ac.uk/GMR/moneth/monadology.html

The principal issue is Leibniz's theory of causation. One account is given at

http://plato.stanford.edu/entries/leibniz-causation/


There seems to be some confusion and differing accounts on how things happen,

but my own understanding is t

Re: Re: What is the mind-body problem ? How do monads cause change ?

2012-08-27 Thread Richard Ruquist
Hence, both are extended. QED

On Mon, Aug 27, 2012 at 9:13 AM, Roger Clough  wrote:

>  Hi Richard Ruquist
>
> The more brain, the more mind.
>
>
>
> Roger Clough, rclo...@verizon.net
> 8/27/2012
> Leibniz would say, "If there's no God, we'd have to invent him so
> everything could function."
>
> - Receiving the following content -
> *From:* Richard Ruquist 
> *Receiver:* everything-list 
> *Time:* 2012-08-27, 09:09:24
> *Subject:* Re: What is the mind-body problem ? How do monads cause change
> ?
>
>  Roger,
> If the mind were not extended,
> then animal intelligence would not depend on brain size.
> Richard

Re: Gödel theorem, the last vestige of magic Pythagorean mysticism.

2012-08-27 Thread Richard Ruquist
Is it true that real numbers are complete?
Richard

On Mon, Aug 27, 2012 at 9:11 AM, Bruno Marchal  wrote:

>
> On 27 Aug 2012, at 11:47, Alberto G. Corona wrote:
>
>  Please don't take my self-confident style for absolute certainty. I just
>> expose my ideas for discussion.
>>
>> The fascination with which Gödel's theorem is taken may reflect the
>> atmosphere of magic that invariably surrounds anything for which there is
>> a lack of understanding. In this case, with the addition of a supposed
>> superiority of mathematicians over machines.
>>
>
> I have never really heard of a mathematician or a logician convinced by
> such an idea.
>
>
>
>> What Gödel discovered was that the set of true statements in mathematics
>> (in the subset of integer arithmetic) cannot be demonstrated from a finite
>> set of axioms. And to prove this, he invented a way to discover new
>> unprovable theorems, given any set of axioms, by means of an automatic
>> procedure, called diagonalization, that the most basic interpreted program
>> can perform. No more, no less. Although this was the end of Hilbert's idea.
>>
>
> OK.
>
>
>
>
>> What Penrose and others did is to find a particular (although quite
>> direct) translation of Gödel's theorem to an equivalent problem in terms
>> of a Turing machine where the machine
>
> Translating Gödel in terms of Turing machines is well-known pedagogical
> folklore in logic. It is already in the old books by Kleene, Davis, etc. It
> indeed makes things simpler, but sometimes it leads to misunderstanding,
> notably due to the common confusion between computing and proving.
>
>
>
>  does not perform the diagonalization and the set of axioms cannot be
>> extended.
>>
>
> That's the case for the enumeration of total computable functions, and is
> well known. I am not sure Penrose found anything new here.
> Penrose just assumes that Gödel's theorem does not apply to us, and he
> assumes in particular that humans know that they are consistent, without
> justification. I agree with Penrose, but not for any form of formalisable
> knowledge. And this is true for machines too.
>
>
>
>  By restricting their reasoning to this kind of framework, Penrose
>> demonstrates what he wants to demonstrate: the superiority of the mind,
>> which is capable of doing a simple diagonalization.
>>
>> IMHO, I do not find Gödel's theorem a limitation for computers. I think
>> that Penrose and others did a correct translation from Gödel's theorem to
>> a problem about a Turing machine. But this translation can be done in a
>> different way.
>>
>> It is possible to design a program that modifies itself by adding new
>> axioms, the ones produced by the diagonalizations, so that the number of
>> axioms can grow as needed. This is routinely done for equivalent problems
>> in rule-based expert systems or in ordinary interpreters (aided by humans)
>> in complex domains. Reduced to integer arithmetic: a Turing machine
>> that implements a mathematical proof system at a deep level, that is, in
>> an interpreter where new axioms can be automatically added through
>> diagonalizations, may expand the set of known deductions by incorporating
>> new axioms. This is not prohibited by Gödel's theorem. What the theorem
>> prohibits is knowing ALL true statements in this domain of integer
>> mathematics. But this also applies to humans. A computer can realize that
>> a new axiom is absent from its initial set and add it, just like humans.
>>
>> I do not see in this a limitation on human free will. I wrote about this
>> before. The notion of free will based on the deterministic nature of
>> physics or of computations is a degenerate, false problem which is an
>> obsession of the positivists.
>>
>
> I get the feeling I have already commented on this. Yes, Gödel's proof is
> constructive, and machines can use it to extend themselves, and John Myhill
> (and myself, and others) have exploited this in many ways.
>
> Gödel's second incompleteness theorem has been generalized by Löb, and
> then Solovay showed that the modal logical systems G and G* answer all
> the questions at the modal propositional level. For example, the second
> incompleteness theorem <>t -> ~[]<>t is a theorem of G, and <>t is a
> theorem of G*, etc.
>
> Gödel's theorem is not a handicap for machines; on the contrary, it
> prevents the world of numbers and machines from any normative or
> "totalitarian" (complete) theory about them. It shows that arithmetical
> truth, of computerland, is inexhaustible. Incompleteness is a chance for
> mechanism, as Judson Webb already argued.
>
> Bruno
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To post to this group, send email to everything-list@googlegroups.com.
> To unsubscribe from this group, send email to
> everything-list+unsubscr...@googlegroups.com.
> For more options, visit this group at
> http://groups.google.com/group/everything-list?hl=en.

Re: Re: What is the mind-body problem ? How do monads cause change ?

2012-08-27 Thread Roger Clough
Hi Richard Ruquist 

The more brain, the more mind.



Roger Clough, rclo...@verizon.net
8/27/2012 
Leibniz would say, "If there's no God, we'd have to invent him so everything 
could function."
- Receiving the following content - 
From: Richard Ruquist 
Receiver: everything-list 
Time: 2012-08-27, 09:09:24
Subject: Re: What is the mind-body problem ? How do monads cause change ?


Roger,
If the mind were not extended,
then animal intelligence would not depend on brain size.
Richard


On Mon, Aug 27, 2012 at 8:39 AM, Roger Clough  wrote:

It has been asked here-- what in fact is the mind-body problem ? 

http://oregonstate.edu/instruct/phl302/writing/mind-top.html 


"The Mind Body Problem 

What philosophers call the mind body problem originated with Descartes. In 
Descartes' philosophy 
the mind is essentially a thinking thing, while the body is essentially an 
extended thing - something which occupies space. 
Descartes held that there is two way causal interaction between these two quite 
different kinds of substances. 
So, the body effects the mind in perception, and the mind effects the body in 
action. But how is this possible? 
How can an unextended thing effect something in space. How can something in 
space effect an unextended thing?" 
-

Immediately below I give an account of a man being pricked by a pin
in Leibniz's world versus such an action in the actual or phenomenal world.

In summary, and in addition,

1) They amount to the same account, one virtual and one actual or phenomenal.

2) Our so-called free will is only an apparent one.

3) Because monads overlap (are weakly nonlocal), since space is not a property,
monads can have some limited, unconscious awareness of the rest of the universe
(including all life).
This awareness is generally very weak and generally unconscious.
Still, it means that we are an intimate part of the universe and all that
happens.

4) The virtual world of the monad of man strictly portrays men
as blind, completely passive robots. However, his monad
is inside of the supreme monad, which is his puppet-master.
But at the same time, like Pinocchio as I recall, he
becomes seemingly alive in the everyday sense that we feel we are alive,
but through the supreme monad in which he is subordinately enclosed.

5) There is some bleed-through of future perceptions, so we can have
some dim awareness of future happenings.

I will just briefly discuss actions here by man. Each man is entirely virtual,
a monad in the space of thought containing a database of perceptions
(given to him by God) of all the perceptions of the other monads in the
universe.
Some of these (animals) are mindless and others feelingless,
having only corporeal functions (plants, rocks).

Every monad has an internal source of energy, plus a pre-programmed
set of virtual perceptions continuously and instantaneously given to him by
the Supreme Monad, and a set of virtual actions the monad is programmed
to virtually desire or will, giving him new perceptions as well as every other
monad in the universe.

All of these must function as virtual agents or entities according to Leibniz's
principle of preestablished harmony. Only the supreme monad (God) can perceive,
feel, and act.

So if God wants you to be pricked by a pin, feel the pain, and react,
he will cause a virtual monadic pin to virtually prick your sensory monad,
and then have you virtually feel pain as a monad, but actually feel
a real pain in the phenomenal world, and virtually jump and really
jump in both worlds, one virtually and one physically.

How does this differ

==
A MORE COMPLETE ACCOUNT OF CAUSATION BY MONADS
?
Personally, I am looking at the "how is this possible" aspect, 
first by asking what is possible from the aspect of Leibniz's metaphysics. 

What is possible is limited by Leibniz's monadology:

http://www.philosophy.leeds.ac.uk/GMR/moneth/monadology.html

The principal issue is Leibniz's theory of causation. One account is given at

http://plato.stanford.edu/entries/leibniz-causation/

There seems to be some confusion and differing accounts on how things happen,
but my own understanding is that:

1) All simple substances are monads, of which there are 3 types:
those just containing bodily perceptions (rocks, vegetables), 
those containing affective perceptions as well (animals) and those (man)
which also have mental perceptions (ie all things mental). 

2) Monads can do nothing and perceive nothing on their own, but only through
God (the supreme monad), according to our desires, which are actually God's.

3) All of the actions of lesser monads and the supreme monad God have been 
scripted
in the Preestabli

Re: Gödel theorem, the last vestige of magic Pythagorean mysticism.

2012-08-27 Thread Bruno Marchal


On 27 Aug 2012, at 11:47, Alberto G. Corona wrote:

Please don't take my self-confident style for absolute certainty. I  
just expose my ideas for discussion.


The fascination with which the Gödel theorem is regarded may reflect  
the atmosphere of magic that invariably surrounds anything for  
which there is a lack of understanding. In this case, with the  
addition of a supposed superiority of mathematicians over machines.


I have never really heard of a mathematician or a logician  
convinced by such an idea.





What Gödel discovered was that the set of true statements in  
mathematics (in the subset of integer arithmetic) cannot be  
derived from any finite set of axioms. And to prove this, he invented  
a way to discover new unprovable theorems, given any set of axioms,  
by means of an automatic procedure, called diagonalization, that the  
most basic interpreted program can perform. No more, no less.  
Although this was the end of the Hilbert idea.


OK.





What Penrose and others did is to find a particular (although quite  
direct) translation of the Gödel theorem to an equivalent problem in  
terms of a Turing machine where the machine


Translating Gödel in terms of Turing machines is well-known  
pedagogical folklore in logic. It is already in the old books by  
Kleene, Davis, etc. It makes things simpler indeed, but sometimes it  
leads to misunderstanding, notably due to the common confusion between  
computing and proving.




does not perform the diagonalization and the set of axioms cannot  
be extended.


That's the case for the enumeration of total computable functions, and  
is well known. I am not sure Penrose found anything new here.
Penrose just assumes that Gödel's theorem does not apply to us, and he  
assumes in particular that humans know that they are consistent,  
without justification. I agree with Penrose, but not for any form of  
formalisable knowledge. And this is true for machines too.
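The enumeration argument mentioned here can be made concrete with the classic diagonal construction: given any list of total functions, define a new function that differs from the i-th one at input i. The following toy sketch uses a hypothetical finite list as a stand-in for an effective enumeration; that a real effective enumeration of all total computable functions cannot exist is exactly what the argument shows.

```python
# Classic diagonalization over an (assumed) enumeration of total functions.
# `fs` is a hypothetical finite stand-in for the enumeration f_0, f_1, ...
fs = [lambda n: 0, lambda n: n, lambda n: n * n]

def diag(n):
    # Differs from every f_i at input i, so diag cannot appear in the list.
    return fs[n](n) + 1 if n < len(fs) else 0

assert all(diag(i) != fs[i](i) for i in range(len(fs)))
print([diag(i) for i in range(3)])   # [1, 2, 5]
```

If `diag` were itself in the list, say as `fs[k]`, then `diag(k)` would have to equal both `fs[k](k)` and `fs[k](k) + 1`, a contradiction.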




By restricting their reasoning to this kind of framework, Penrose  
demonstrates what he wants to demonstrate: the superiority of the  
mind, which is capable of doing a simple diagonalization.


IMHO, I do not find the Gödel theorem a limitation for computers. I  
think that Penrose and others made a correct translation from the Gödel  
theorem to a problem about a Turing machine. But this translation can  
be done in a different way.


It is possible to design a program that modifies itself by adding new  
axioms, the ones produced by the diagonalizations, so that the  
number of axioms can grow as needed. This is routinely done for  
equivalent problems in rule-based expert systems or in ordinary  
interpreters (aided by humans) in complex domains. Reduced to  
integer arithmetic, a Turing machine that implements a proof  
system at the deep level, that is, in an interpreter where new  
axioms can be automatically added through diagonalization, may  
expand the set of known deductions by incorporating new axioms. This  
is not prohibited by the Gödel theorem. What the theorem prohibits  
is knowing ALL true statements in this domain of integer  
mathematics. But this also applies to humans. A computer can  
recognize that a new axiom is absent from its initial set and add it,  
just like humans.
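The self-extending prover described above can be caricatured in a few lines. This is only a toy sketch: `diagonal_sentence` is a hypothetical stand-in for Gödel's construction (here it merely tags the current axiom set), not a real arithmetization.

```python
# Toy sketch of a self-extending proof system: each round, diagonalization
# yields a sentence the current axioms cannot prove, and the machine adopts
# it as a new axiom. `diagonal_sentence` is a hypothetical stand-in for
# Goedel's construction, not a real arithmetization.

def diagonal_sentence(axioms):
    # Returns a sentence independent of the current axioms
    # (modeled here as a string mentioning the whole axiom set).
    return "G(" + ", ".join(sorted(axioms)) + ")"

def extend(axioms, rounds):
    axioms = set(axioms)
    for _ in range(rounds):
        g = diagonal_sentence(axioms)
        if g not in axioms:   # the machine notices the gap...
            axioms.add(g)     # ...and adds the new axiom itself
    return axioms

base = {"A1", "A2"}
print(len(extend(base, 3)))   # 5: each round contributes one new axiom
```

As the theorem predicts, the loop never terminates with a complete system: `diagonal_sentence` applies just as well to each extended axiom set.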


I do not see in this a limitation for human free will. I wrote about  
this before. The notion of free will based on the deterministic  
nature of physics or of computations is a degenerate, false  
problem which is an obsession of the Positivists.


I've got the feeling I have already commented on this. Yes, Gödel's proof  
is constructive, and machines can use it to extend themselves, and John  
Myhill (and myself, and others) have exploited this in many ways.


Gödel's second incompleteness theorem has been generalized by Löb, and  
then Solovay has shown that the modal logical systems G and G* answer  
all the questions at the modal propositional level. For example the  
second incompleteness theorem <>t -> ~[]<>t is a theorem of G, and <>t  
is a theorem of G*, etc.
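For readers unused to the modal notation, the formula can be unpacked as follows (with []p read as "p is provable" and <>p as its dual):

```latex
% Provability reading of the modal formula <>t -> ~[]<>t:
%   \Box p      "p is provable in the theory T"
%   \Diamond p  \equiv \lnot\Box\lnot p, so \Diamond\top reads "T is consistent"
% The theorem of G (the logic of provability, also called GL):
\Diamond\top \rightarrow \lnot\Box\Diamond\top
% i.e. if T is consistent, then T cannot prove its own consistency
% (second incompleteness); G* moreover asserts \Diamond\top itself.
```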


Gödel's theorem is not a handicap for machines; on the contrary it  
prevents the world of numbers and machines from any normative or  
"totalitarian" (complete) theory about them. It shows that  
arithmetical truth, of computerland, is inexhaustible. Incompleteness  
is a chance for mechanism, as Judson Webb already argued.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: What is the mind-body problem ? How do monads cause change ?

2012-08-27 Thread Richard Ruquist
Roger,
If the mind were not extended,
then animal intelligence would not depend on brain size.
Richard

On Mon, Aug 27, 2012 at 8:39 AM, Roger Clough  wrote:

>  It has been asked here-- what in fact is the mind-body problem ?
>
> http://oregonstate.edu/instruct/phl302/writing/mind-top.html
>
>
> "The Mind Body Problem
>
> What philosophers call the mind body problem originated with Descartes. In
> Descartes' philosophy
> the mind is essentially a thinking thing, while the body is essentially an
> extended thing - something which occupies space.
> Descartes held that there is two way causal interaction between these two
> quite different kinds of substances.
> So, the body effects the mind in perception, and the mind effects the body
> in action. But how is this possible?
> How can an unextended thing effect something in space. How can something
> in space effect an unextended thing?"
>
> -
>
> Immediately below I give an account of a man being pricked by a pin
> in Leibniz's world versus such an action in the actual or phenomenal world.
>
> In summary, and in addition,
>
> 1) They amount to the same account, one virtual and one actual or
> phenomenal.
>
> 2) Our so-called free will is only an apparent one.
>
> 3) Because monads overlap (are weakly nonlocal), since space is not a
> property,
> monads can have some limited, unconscious awareness of the rest of the
> universe (including all life).
> This awareness is generally very weak and generally unconscious.
> Still, it means that we are an intimate part of the universe and all that
> happens.
>
> 4) The virtual world of the monad of man strictly portrays men
> as blind, completely passive robots. However, his monad
> is inside of the supreme monad, which is his puppet-master.
> But at the same time, then like as I recall Pinocchio, he
> becomes seemingly alive in the everyday sense that we feel we are alive.
> but through the supreme monad in which he is subordinately enclosed.
>
> 5) There is some bleed-through of future perceptions, so we can have
> some dim awareness of future happenings.
>
>
>
>
>
> 
>
>
>
> I will just briefly discuss actions here by man. Each man is entirely
> virtual,
> a monad in the space of thought containing a database of perceptions
> (given to him by God, of all the perceptions of the other monads in the
> universe.
> Some of these (animals) are mindless and others feelingless,
> having only corporeal functions (plants, rocks).
>
> Every monad has an internal source of energy, plus a pre-programmed
> set of virtual perceptions continuously and instantaneously given to him
> by
> the Supreme Monad, and a set of virtual actions the monad is programmed
> to virtually desire or will giving him new perceptions as well as every
> other
> monad in the universe.
>
>  All of these must function as virtual agents or entities according to
> Leibniz's
> principle of preestablished harmony. Only the supreme monad (God) can
> perceive,
> feel, and act.
>
>
> So if God wants you to be pricked by a pin, feel the pain, and react,
> he will cause a virtual monadic pin to virtually prick your sensory monad,
> and then have you virtually feel pain as a monad, but actually to feel
> a real pain in the phenomenal world, and to virtually jump and really
> jump in both world, one virtually and one physically.
>
>
>
> How does this differ
>
>
>
> ==
> A MORE COMPLETE ACCOUNT OF CAUSATION BY MONADS
>
> Personally, I am looking at the "how is this possible" aspect,
> first by asking what is possible from the aspect of Leibniz's metaphysics.
>
> What is possible is limited by Leibniz's monadology:
>
> http://www.philosophy.leeds.ac.uk/GMR/moneth/monadology.html
>
> The principle issue is Leibniz's theory of causation. One account is given
> at
>
> http://plato.stanford.edu/entries/leibniz-causation/
>
> There seems to be some confusion and differing accounts on how things
> happen,
> but my own understanding is that:
>
> 1) All simple substances are monads, of which there are 3 types:
> those just containing bodily perceptions (rocks, vegetables),
> those containing affective perceptions as well (animals) and those (man)
> which also have mental perceptions (ie all things mental).
>
> 2) Monads can do nothing and perceive nothing on their own, but only
> through God
> (the supreme monad) according to our desires, which are actually God's
>
>
> 3) All of the actions of lesser monads and the supreme monad God have been
> scripted
> in the Preestablished Harmony.
>
> 4) Thus causation is virtual, say like in a silent movie. No actual forces
> are involved,
> only virtual forces.
>
> 5)
>
> Roger Clough, rclo...@verizon.net <

Re: Simple proof that our intelligence transcends that of computers

2012-08-27 Thread Bruno Marchal


On 26 Aug 2012, at 21:59, Stephen P. King wrote:


On 8/26/2012 2:09 PM, Bruno Marchal wrote:


On 25 Aug 2012, at 15:12, benjayk wrote:




Bruno Marchal wrote:



On 24 Aug 2012, at 12:04, benjayk wrote:

But this avoids my point that we can't imagine that levels, context
and ambiguity don't exist, and this is why computational emulation
does not mean that the emulation can substitute the original.


But here you make a confusion of levels, as I think Jason tried to  
point out.


A similar one to the one made by Searle in the Chinese Room.

As emulator (computing machine) Robinson Arithmetic can simulate
exactly Peano Arithmetic, even as a prover. So for example Robinson
arithmetic can prove that Peano arithmetic proves the consistency  
of

Robinson Arithmetic.
But you cannot conclude from that that Robinson Arithmetic can  
prove
its own consistency. That would contradict Gödel II. When PA uses  
the
induction axiom, RA might just say "huh", and apply it for the  
sake of

the emulation without any inner conviction.
I agree, so I don't see how I confused the levels. It seems to me  
you have just stated that Robinson Arithmetic indeed cannot substitute  
Peano Arithmetic, because RA's emulation of PA only makes sense with  
respect to PA (in cases where PA does a proof that RA can't do).


Right. It makes only first person sense to PA. But then RA has  
succeeded in making PA alive, and PA could a posteriori realize  
that the RA level was enough.
Like when I converse with Einstein's brain's book (à la Hofstadter),  
just by manipulating the pages of the book. I don't become Einstein  
through my making of that process, but I can have a genuine  
conversation with Einstein through it. He will know that he has  
survived, or that he survives through that process.


Dear Bruno,

   Please explain this statement! How is there an "Einstein" the  
person that will know anything in that case? How is such an entity  
capable of "knowing" anything that can be communicated? Surely you  
are not considering a consistently solipsistic version of Einstein!  
I don't have a problem with that possibility per se, but you must  
come clean about this!


What is the difference between processing the book with a brain, a  
computer, or a book? This is not step 8, it is step 0.  Or I miss what  
you are asking.









That is, it *needs* PA to make sense, and so
we can't ultimately substitute one with the other (just in some  
relative

way, if we are using the result in the right way).


Yes, because that would be like substituting a person by another,  
on the pretext that they both play the same role. But comp substitutes  
the lower process, not the high level one, which can indeed be quite  
different.


   Is there a spectrum or something similar to it for substitution  
levels?


There is a highest substitution level, above which you might still  
survive, but with some changes in your first person experience (that  
you can or cannot be aware of). Below that highest level, all levels  
are correct, I would say, by definition.
If your level is the level of neurons, you can understand that if I  
simulate you at the level of the elementary particles, I will  
automatically simulate you at the level of your neurons, and you will  
not see the difference (except for the price of the computer and  
memory, and other non-relevant things like that). OK?








It is like the word "apple" cannot really substitute a picture of  
an apple
in general (still less an actual apple), even though in many  
context we can
indeed use the word "apple" instead of using a picture of an apple  
because
we don't want to be shown how it looks, but just know that we talk  
about
apples - but we still need an actual apple or at least a picture  
to make

sense of it.


Here you make an invalid jump, I think. If I play chess on a  
computer, and make a backup of it, and then continue on a totally  
different computer, you can see that I will be able to continue the  
same game with the same chess program, even though the computer is  
totally different. I have just to re-implement it correctly. Same  
with comp. Once we bet on the correct level, functionalism applies  
to that level and below, but not above (unless of course if I am  
willing to have some change in my consciousness, like amnesia, etc.).
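The chess analogy is easy to make concrete. A minimal sketch, under the assumption of a made-up state format (not any real engine's): the game state is serialized on one machine and restored on another, and the same game continues unchanged.

```python
import json

# Sketch of substrate independence: snapshot a running chess game's state
# on computer A, restore it on computer B, and the same game continues.
# The state format is a hypothetical minimal one, not a real engine's.
game = {"moves": ["e4", "e5", "Nf3"], "to_move": "black"}

backup = json.dumps(game)        # backup made on computer A
restored = json.loads(backup)    # reloaded on a totally different computer B

assert restored == game          # nothing at the program level has changed
restored["moves"].append("Nc6")  # ...and play resumes where it left off
print(restored["to_move"], len(restored["moves"]))
```

The point is only that the program and its state, not the particular hardware, fix what the game is; functionalism at the chosen level and below.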


   But this example implies the necessity of the possibility of a  
physical implementation,


In which modal logic?



what is universal is that no particular physical system is  
required for the chess program.




With comp, to make things simple, we are high level programs. Their  
doing is 100% emulable by any computer, by definition of programs  
and computers.


   I agree with this, but anything that implies interactions  
between separate minds implies separation of implementations, and  
this only happens in the physical realm.


No, this is not correct. You fail to appreciate that all  
implementations and interactions are already emulated in arithmeti

What is the mind-body problem ? How do monads cause change ?

2012-08-27 Thread Roger Clough
It has been asked here-- what in fact is the mind-body problem ? 

http://oregonstate.edu/instruct/phl302/writing/mind-top.html 


"The Mind Body Problem 

What philosophers call the mind body problem originated with Descartes. In 
Descartes' philosophy 
the mind is essentially a thinking thing, while the body is essentially an 
extended thing - something which occupies space. 
Descartes held that there is two way causal interaction between these two quite 
different kinds of substances. 
So, the body effects the mind in perception, and the mind effects the body in 
action. But how is this possible? 
How can an unextended thing effect something in space. How can something in 
space effect an unextended thing?" 
-

Immediately below I give an account of a man being pricked by a pin
in Leibniz's world versus such an action in the actual or phenomenal world.

In summary, and in addition,

1) They amount to the same account, one virtual and one actual or phenomenal.

2) Our so-called free will is only an apparent one.

3) Because monads overlap (are weakly nonlocal), since space is not a property,
monads can have some limited, unconscious awareness of the rest of the universe 
(including all life).
This awareness is generally very weak and generally unconscious.
Still, it means that we are an intimate part of the universe and all that 
happens.

4) The virtual world of the monad of man strictly portrays men
as blind, completely passive robots. However, his monad 
is inside of the supreme monad, which is his puppet-master. 
But at the same time, then like as I recall Pinocchio, he
becomes seemingly alive in the everyday sense that we feel we are alive.
but through the supreme monad in which he is subordinately enclosed.

5) There is some bleed-through of future perceptions, so we can have
some dim awareness of future happenings.








I will just briefly discuss actions here by man. Each man is entirely virtual,
a monad in the space of thought containing a database of perceptions 
(given to him by God, of all the perceptions of the other monads in the 
universe.  
Some of these (animals) are mindless and others feelingless, 
having only corporeal functions (plants, rocks).

Every monad has an internal source of energy, plus a pre-programmed 
set of virtual perceptions continuously and instantaneously given to him by 
the Supreme Monad, and a set of virtual actions the monad is programmed
to virtually desire or will giving him new perceptions as well as every other
monad in the universe. 

All of these must function as virtual agents or entities according to Leibniz's 
principle of preestablished harmony. Only the supreme monad (God) can perceive,
feel, and act.


So if God wants you to be pricked by a pin, feel the pain, and react,
he will cause a virtual monadic pin to virtually prick your sensory monad,
and then have you virtually feel pain as a monad, but actually feel
a real pain in the phenomenal world, and virtually jump and really
jump in both worlds, one virtually and one physically.



How does this differ



==
A MORE COMPLETE ACCOUNT OF CAUSATION BY MONADS

Personally, I am looking at the "how is this possible" aspect, 
first by asking what is possible from the aspect of Leibniz's metaphysics. 

What is possible is limited by Leibniz's monadology:

http://www.philosophy.leeds.ac.uk/GMR/moneth/monadology.html

The principal issue is Leibniz's theory of causation. One account is given at

http://plato.stanford.edu/entries/leibniz-causation/

There seems to be some confusion and differing accounts on how things happen,
but my own understanding is that:

1) All simple substances are monads, of which there are 3 types:
those just containing bodily perceptions (rocks, vegetables), 
those containing affective perceptions as well (animals) and those (man)
which also have mental perceptions (ie all things mental). 

2) Monads can do nothing and perceive nothing on their own, but only through
God (the supreme monad), according to our desires, which are actually God's.
 

3) All of the actions of lesser monads and the supreme monad God have been 
scripted
in the Preestablished Harmony. 

4) Thus causation is virtual, say like in a silent movie. No actual forces are 
involved,
only virtual forces. 

5) 

Roger Clough, rclo...@verizon.net 
8/27/2012 
Leibniz would say, "If there's no God, we'd have to invent him so everything 
could function." 
- Receiving the following content - 
From: benjayk 
Receiver: everything-list 
Time: 2012-08-25, 11:16:59 
Subject: Re: Simple proof that our intelligence transcends that of computers 


I am getting a bit tired of our discussion, so I will just adress th

Two reasons why computers IMHO cannot exhibit intelligence

2012-08-27 Thread Roger Clough
Hi meekerdb 

IMHO, computers cannot have intelligence,
because intelligence requires at least one ability:
the ability to make autonomous choices (choices completely
one's own). Computers can do nothing on their own;
they can only do what software and hardware tell them to do. 

Another, closely related reason is that there must be an agent that does the 
choosing,
and IMHO the agent has to be separate from the system.
Gödel, perhaps, I speculate. 


Roger Clough, rclo...@verizon.net
8/27/2012 
Leibniz would say, "If there's no God, we'd have to invent him so everything 
could function."
- Receiving the following content - 
From: meekerdb 
Receiver: everything-list 
Time: 2012-08-26, 14:56:29
Subject: Re: Simple proof that our intelligence transcends that of computers


On 8/26/2012 10:25 AM, Bruno Marchal wrote:
>
> On 25 Aug 2012, at 12:35, Jason Resch wrote:
>
>>
>> I agree different implementations of intelligence have different 
>> capabilities and 
>> roles, but I think computers are general enough to replicate any 
>> intelligence (so long 
>> as infinities or true randomness are not required).
>
> And now a subtle point. Perhaps.
>
> The point is that computers are general enough to replicate intelligence EVEN 
> if 
> infinities and true randomness are required for it.
>
> Imagine that our consciousness requires some ORACLE, for example under the 
> form of some 
> non-compressible sequence 11101111011000110101011011... (say)
>
> Being incompressible, that sequence cannot be part of my brain at my 
> substitution level, 
> because this would make it impossible for the doctor to copy my brain into a 
> finite 
> string. So such sequence operates "outside my brain", and if the doctor copy 
> me at the 
> right comp level, he will reconstitute me with the right "interface" to the 
> oracle, so I 
> will survive and stay conscious, despite my consciousness depends on that 
> oracle.
>
> Will the UD, just alone, or in arithmetic, be able to copy me in front of 
> that oracle?
>
> Yes, as the UD dovetails on all programs, but also on all inputs, and in this 
> case, he 
> will generate me successively (with large delays in between) in front of all 
> finite 
> approximation of the oracle, and (key point), the first person indeterminacy 
> will have 
> as domain, by definition of first person, all the UD computation where my 
> virtual brain 
> use the relevant (for my consciousness) part of the oracle.
>
> A machine can only access finite parts of an oracle, in the course of a 
> computation 
> requiring an oracle, and so everything is fine.
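The dovetailing described in the quoted passage can be sketched in a few lines. This is a toy under stated assumptions: program indices, oracle-prefix lengths, and step budgets are abstract integers; no real programs are executed.

```python
from itertools import count

# Toy sketch of the UD's dovetailing over programs AND inputs: enumerate
# triples (program index, oracle-prefix length, step budget) so that every
# program is eventually run with every finite prefix of the oracle, for
# any number of steps.
def dovetail(limit):
    triples = []
    for n in count(1):                  # n bounds the sum of all three parts
        for p in range(n):              # which program
            for k in range(n - p):      # how many oracle bits it may read
                triples.append((p, k, n - p - k))   # steps to run it for
                if len(triples) == limit:
                    return triples

print(dovetail(4))   # [(0, 0, 1), (0, 0, 2), (0, 1, 1), (1, 0, 1)]
```

Any particular triple is reached after finitely many steps, which is the key point above: in any single computation the machine only ever consults a finite part of the oracle.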

That's how I imagine COMP instantiates the relation between the physical world 
and 
consciousness; that the physical world acts like the oracle and provides 
essential 
interactions with consciousness as a computational process. Of course that 
doesn't 
require that the physical world be an oracle - it may be computable too.

Brent

>
> Of course, if we need the whole oracular sequence in one step, then comp 
> would be just 
> false, and the brain would need an infinite interface.
>
> The UD really dovetails on all programs, with all possible inputs, even 
> infinite non 
> computable ones.
>
> Bruno
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>





Is there a Sam Spencer on this list ?

2012-08-27 Thread Roger Clough
Is there a Sam Spencer on this list?
He keeps sending out emails that, before being opened,
appear to come from me, but actually don't.
His name, not mine, does however
appear when you open the email, as below.


Roger Clough, rclo...@verizon.net
8/27/2012 
Leibniz would say, "If there's no God, we'd have to invent him so everything 
could function."
- Receiving the following content - 
From: Sam Spencer 
Receiver: everything-list 
Time: 2012-08-25, 08:22:32
Subject: Animation test


The attachments of the original message is as following:
  (1). inmasirne.gif


This is an animation test, please ignore.

-Sam Spencer





This email did not come from me

2012-08-27 Thread Roger Clough
Hi Sam Spencer 

I don't know why or how, but this email from Sam
Spencer appears upon opening to come from me.
But I never sent it.


Roger Clough, rclo...@verizon.net
8/27/2012 
Leibniz would say, "If there's no God, we'd have to invent him so everything 
could function."
- Receiving the following content - 
From: Sam Spencer 
Receiver: everything-list 
Time: 2012-08-25, 09:28:20
Subject: Internal matters


How can his cubic hash frown? How does metahype purge? Should the
insufficient fear roll? Can the ignored upstairs call Bruno? Bruno
sticks a razor above a beard. Why won't this mill thank metahype?

The pulp strikes against his freezing drift. Women discriminates an
abstract. Quantum dynamics obstructs women underneath the biography.
The biggest sophisticate recognizes an overlooked pedantry. Quantum
dynamics fumes throughout women. Quantum dynamics strains against
women.

The Everything list begs the developer in the pedestrian. How can
apprehension walk throughout every continental? Its creator furthers
apprehension with the registered composite. Should the nun recover
from your curry? The courier parts a marriage. An individual lover
kisses apprehension.

The Everything list reverts next to the discharge. Should the
Everything list stop under meekerdb? The downstairs exits over the
packet. A convict breach orbits the Everything list past the problem.

-Sam Spencer





Gödel theorem, the last vestige of magic Pythagorean mysticism.

2012-08-27 Thread Alberto G. Corona
Please don't take my self-confident style for absolute certainty; I am just
presenting my ideas for discussion.

The fascination with Gödel's theorem may reflect the atmosphere of magic that
invariably surrounds anything that is poorly understood, in this case with the
added appeal of a supposed superiority of mathematicians over machines.

What Gödel discovered is that the set of true statements of mathematics
(in the subset of integer arithmetic) cannot be derived from any finite
set of axioms. To prove this, he invented a way to construct, for any given
set of axioms, a new unprovable statement, by means of an automatic procedure
called diagonalization that even the most basic interpreted program can
perform. No more, no less. It was, though, the end of Hilbert's program.
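As a toy illustration (my own sketch, in the Cantor style rather than Gödel's
actual arithmetization), the diagonal procedure itself really is something a
few lines of code can perform: given any listed enumeration of binary
sequences, it mechanically produces a sequence absent from the list.

```python
# Toy sketch of diagonalization (illustrative only, not Goedel's encoding):
# sequences are represented as functions from position n to {0, 1}.

def diagonal(enumeration):
    """Return a sequence differing from enumeration[i] at position i."""
    return lambda n: 1 - enumeration[n](n)

# Example enumeration: sequence i is the constant (i % 2) sequence.
listed = [lambda n, i=i: i % 2 for i in range(10)]
d = diagonal(listed)

# d differs from each listed sequence at the diagonal position,
# so d is not among the listed sequences.
assert all(d(i) != listed[i](i) for i in range(10))
```

The `i=i` default-argument trick just freezes each index in its closure; the
point is only that the diagonal step is a trivial, fully automatic computation.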

What Penrose and others did was to find a particular (although quite direct)
translation of Gödel's theorem into an equivalent problem in terms of a
Turing machine, where the machine does not perform the diagonalization and
the set of axioms cannot be extended. By restricting their reasoning to
this kind of framework, Penrose demonstrates what he wants to demonstrate:
the superiority of the mind, which is capable of performing a simple
diagonalization.

IMHO, Gödel's theorem is not a limitation for computers. I think Penrose and
others made a correct translation of Gödel's theorem into a Turing-machine
problem, but that translation can be done in a different way.

It is possible to design a program that modifies itself by adding new axioms,
the ones produced by the diagonalizations, so that the number of axioms can
grow as needed. This is routinely done for equivalent problems in rule-based
expert systems, or in ordinary interpreters (aided by humans) in complex
domains. Restricted to integer arithmetic: a Turing machine that implements
a mathematical proof system at a deep level, that is, as an interpreter in
which new axioms can be automatically added through diagonalization, can
expand the set of known deductions by incorporating the new axioms. This is
not prohibited by Gödel's theorem. What the theorem prohibits is knowing ALL
true statements in this domain of integer mathematics, but that applies to
humans as well. A computer can notice that an axiom is absent from its
initial set and add it, just like humans.
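A minimal sketch of what I mean, under heavy simplification (the class,
`proves`, and `new_axiom` below are my own toy stand-ins, not a real proof
system): the system keeps a set of axioms, and a placeholder diagonalization
step hands it a statement indexed by the current axiom set, hence never
already provable; the system then adopts it as a new axiom.

```python
# Toy model (illustrative only) of a self-extending axiom system.
# "Provable" here is just membership in the axiom set; new_axiom() stands
# in for the statement a real diagonalization would produce.

class ToySystem:
    def __init__(self, axioms):
        self.axioms = set(axioms)

    def proves(self, stmt):
        return stmt in self.axioms

    def extend(self, stmt):
        # Adopt a statement the current axioms cannot prove as a new axiom.
        if not self.proves(stmt):
            self.axioms.add(stmt)

def new_axiom(system):
    # Placeholder diagonalization: a statement indexed by the current
    # axiom set, so it is never already in that set.
    return f"G({len(system.axioms)})"

s = ToySystem({"0=0"})
for _ in range(3):
    g = new_axiom(s)
    assert not s.proves(g)   # unprovable in the current system...
    s.extend(g)
    assert s.proves(g)       # ...but provable once adopted as an axiom
```

Gödel's theorem guarantees there is always a next such statement; nothing in
it forbids the machine from performing this extension step itself.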

Nor do I see in this any limitation on human free will; I have written about
this before. The notion of free will as threatened by the deterministic
nature of physics or of computation is a degenerate, false problem and an
obsession of the positivists.




Re: Simple proof that our intelligence transcends that of computers

2012-08-27 Thread Bruno Marchal


On 26 Aug 2012, at 20:56, meekerdb wrote:


On 8/26/2012 10:25 AM, Bruno Marchal wrote:


On 25 Aug 2012, at 12:35, Jason Resch wrote:



I agree different implementations of intelligence have different  
capabilities and roles, but I think computers are general enough  
to replicate any intelligence (so long as infinities or true  
randomness are not required).


And now a subtle point. Perhaps.

The point is that computers are general enough to replicate  
intelligence EVEN if infinities and true randomness are required  
for it.


Imagine that our consciousness requires some ORACLE, for example in the
form of some incompressible sequence
11101111011000110101011011... (say).


Being incompressible, that sequence cannot be part of my brain at
my substitution level, because this would make it impossible for
the doctor to copy my brain into a finite string. So such a sequence
operates "outside my brain", and if the doctor copies me at the right
comp level, he will reconstitute me with the right "interface" to
the oracle, so I will survive and stay conscious, even though my
consciousness depends on that oracle.


Will the UD, just alone, or in arithmetic, be able to copy me in  
front of that oracle?


Yes, as the UD dovetails on all programs, but also on all inputs,
and in this case it will generate me successively (with large
delays in between) in front of all finite approximations of the
oracle, and (key point) the first person indeterminacy will have
as domain, by definition of first person, all the UD computations
where my virtual brain uses the relevant (for my consciousness) part
of the oracle.


A machine can only access finite parts of an oracle in the course
of a computation requiring the oracle, and so everything is fine.
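The dovetailing idea can be sketched in a few lines (a toy model of my own,
with `dovetail` and the sample program `parity` as illustrative names, not
any actual UD implementation): run every (program, finite oracle prefix)
pair for a bounded number of steps, raising the bound forever, so any
computation that consults only finitely many oracle bits is eventually
reproduced with the prefix it needs.

```python
# Toy sketch of dovetailing over programs AND finite oracle prefixes.
from itertools import count, product

def dovetail(programs, max_rounds=None):
    rounds = count(1) if max_rounds is None else range(1, max_rounds + 1)
    for bound in rounds:
        # All oracle prefixes of length < bound, all programs, `bound` steps.
        for k in range(bound):
            for prefix in product((0, 1), repeat=k):
                for i, prog in enumerate(programs):
                    yield i, prefix, prog(prefix, steps=bound)

# Toy "program": consults the oracle bits it was given, within its step bound.
def parity(prefix, steps):
    return sum(prefix) % 2 if len(prefix) <= steps else None

# Truncated run; with max_rounds=None the generator dovetails forever.
results = list(dovetail([parity], max_rounds=3))
assert (0, (1, 0), 1) in results
```

No stage ever touches more than a finite part of any oracle, yet every
finite-prefix computation occurs somewhere in the enumeration, which is the
sense in which "everything is fine" above.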


That's how I imagine COMP instantiates the relation between the  
physical world and consciousness; that the physical world acts like  
the oracle and provides essential interactions with consciousness as  
a computational process.


OK.


Of course that doesn't require that the physical world be an oracle  
- it may be computable too.


It has to have both aspects, and, a priori, the random oracles
rule, as they are vastly more numerous. That is the measure, or white
rabbit, problem. Physics must be described by something linear at the
bottom, involving deep (in Bennett's sense) observers, so as to
stabilize consciousness on long coherent histories.
That would make us both relatively rare and yet multiplied in a
continuum, if the "physical" computation manages the dovetailing
on the oracles well. The math confirms this, but a refutation of comp
is not yet completely excluded either.


Bruno





Brent



Of course, if we need the whole oracular sequence in one step,
then comp would be just false, and the brain would need an infinite
interface.


The UD really does dovetail on all programs, with all possible inputs,
even infinite non-computable ones.


Bruno

http://iridia.ulb.ac.be/~marchal/









http://iridia.ulb.ac.be/~marchal/


