On Mon, May 6, 2019 at 2:44 AM Bruce Kellett <bhkellet...@gmail.com> wrote:

> On Mon, May 6, 2019 at 4:41 PM Jason Resch <jasonre...@gmail.com> wrote:
>
>> On Mon, May 6, 2019 at 1:19 AM Bruce Kellett <bhkellet...@gmail.com>
>> wrote:
>>
>>>
>>> This is essentially the point that both Turing and Goedel made when they
>>> pointed out that human consciousness is not Turing emulable -- it involves
>>> intuitive leaps that are not algorithmic, presumably coming from an
>>> uncodable environment.
>>>
>>
>> Could you provide citations to Turing and Godel's thoughts on this?  In
>> my view Turing was the founder of functionalism/computationalism, when in
>> his 1950 paper "Computing Machinery and Intelligence" he wrote:
>>
>>
>> “The fact that Babbage's Analytical Engine
>> was to be entirely mechanical will help us rid ourselves of a
>> superstition. Importance is often
>> attached to the fact that modern digital computers are electrical, and
>> the nervous system is also
>> electrical. Since Babbage's machine was not electrical, and since all
>> digital computers are in a sense
>> equivalent, we see that this use of electricity cannot be of theoretical
>> importance. [...] If we wish to
>> find such similarities we should look rather for mathematical analogies
>> of function.”
>>
>>
>> As for Godel, while I am aware of instances where his ideas have been
>> misapplied by some philosophers to argue that human consciousness is not
>> Turing emulable, I am not aware of any writings of Godel where he expressed
>> such ideas. It is hard for me to believe Godel himself misunderstood his
>> own ideas to the extent necessary to believe human mathematicians are
>> somehow immune to their implications.  Godel's 14 points (his own
>> personal philosophy) suggest he saw nothing special about the material
>> composition of the mind, and that he believed all problems (including
>> art) can be addressed through systematic methods. This suggests to me he
>> would be a proponent of at least "weak AI", which again is sufficient for
>> my thought experiment.
>>
>> 1. The world is rational.
>> 2. Human reason can, in principle, be developed more highly (through
>> certain techniques).
>> *3. There are systematic methods for the solution of all problems (also
>> art, etc.).*
>> *4. There are other worlds and rational beings of a different and higher
>> kind.*
>> 5. The world in which we live is not the only one in which we shall live
>> or have lived.
>> 6. There is incomparably more knowable a priori than is currently known.
>> 7. The development of human thought since the Renaissance is thoroughly
>> intelligible (durchaus einsichtige).
>> 8. Reason in mankind will be developed in every direction.
>> 9. Formal rights comprise a real science.
>> *10. Materialism is false.*
>> *11. The higher beings are connected to the others by analogy, not by
>> composition.*
>> 12. Concepts have an objective existence.
>> 13. There is a scientific (exact) philosophy and theology, which deals
>> with concepts of the highest abstractness; and this is also most highly
>> fruitful for science.
>> 14. Religions are, for the most part, bad– but religion is not.
>>
>>
>> (Emphasis mine)
>>
>> Jason
>>
>
> I base these comments on an analysis in a paper by Copeland and Shagrir,
> in the book "Computability: Turing, Goedel, Church, and Beyond" (MIT Press,
> 2015). The main argument is that "In about 1970, Goedel wrote a brief note
> entitled 'A Philosophical Error in Turing's Work' (1972; in Goedel's
> Collected Works)." "In the postscript, Goedel also raised the intriguing
> 'question of whether there exist finite non-mechanical procedures'; and he
> observed that the generalised incompleteness results 'do not establish any
> bounds for the powers of human reason, but rather for the potentialities of
> pure formalism in mathematics'."
>
> "A philosophical error in Turing's work. Turing in [section 9 of "On
> Computable Numbers" (1936, 75-76)] gives an argument which is supposed to
> show that mental procedures cannot go beyond mechanical procedures. However
> ... what Turing disregards completely is the fact that mind, in its use, is
> not static, but constantly developing ... Although at each stage the number
> and precision of the abstract terms at our disposal may be finite, both
> (and, therefore, also Turing's number of distinguishable states of mind)
> may converge toward infinity in the course of the application of the
> procedure. (Gödel 1972, 306)."
>
> Further: "What Turing disregards completely is the fact that mind, in its
> use, is not static, but constantly developing. This is seen, e.g., from the
> infinite series of ever stronger axioms of infinity in set theory, each of
> which expresses a new idea or insight ... Therefore, although at each stage
> of the mind's development the number of possible states is finite, there is
> no reason why this number should not converge to infinity in the course of
> its development. (Godel in Wang 1974, 325)."
>
> The article by Copeland and Shagrir then goes on to defend Turing against
> Goedel's criticism, by pointing out what Turing actually says. Having
> defined a certain infinite binary sequence \delta, which he shows to be
> uncomputable, Turing says: "It is (so far as we know at present) possible
> that any assigned number of figures of \delta can be calculated, but not by
> a uniform process. When sufficiently many figures of \delta have been
> calculated, an essentially new method is necessary in order to obtain more
> figures". This sequence of essentially new methods is, itself, uncomputable.
>
> In Turing's view, the activity of what he called the faculty of intuition
> brings it about that mathematical judgments exceed what can be expressed by
> means of a single formal system.
>
> I recommend going to the original Copeland and Shagrir paper for more
> detail.
>
>
Bruce,

Thank you for the reference; it was an interesting read, and a treasure
trove of interesting quotations.  I include some below for others.

My overall impression from the reading is that Godel's position vacillated
a bit, and in the end his final thoughts on the matter were not entirely
clear.  Turing's position seemed to vary less and came down more firmly on
the side that a finite machine could replicate the behavior of any given
human mathematician.

=========================


When I first published my paper about undecidable propositions the result
could not be pronounced in this generality, because for the notions of
mechanical procedure and of formal system no mathematically satisfactory
definition had been given at that time. This gap has since been filled by
Herbrand, Church and Turing. (Gödel 1935, 166)


So, just a few years after having rejected Church’s proposal, Gödel
embraced it, attributing the “mathematically satisfactory definition” of
computability to Herbrand, Church, and Turing. Why did Gödel change his
mind? Turing’s work was clearly a significant factor. Initially, Gödel
mentions Turing together with Herbrand and Church, but a few pages later he
refers to Turing’s work alone as having demonstrated the correctness of the
various equivalent mathematical definitions:

 “[t]hat this really is the correct definition of mechanical computability
was established beyond any doubt by Turing,” he wrote (193?, 168). More
specifically: [Turing] has shown that the computable functions defined in
this way are exactly those for which you can construct a machine with a
finite number of parts which will do the following thing. If you write down
any number n1, . . ., nr on a slip of paper and put the slip into the
machine and turn the crank, then after a finite number of turns the machine
will stop and the value of the function for the argument n1, . . ., nr will
be printed on the paper. (193?, 168).



 In the Wang period (1967–1976), Gödel's discussions of the implications
of these results were notably openminded and cautious. For example, he said
(in conversation with Wang):

 “The incompleteness results do not rule out the possibility that there is
a theorem-proving computer which is in fact equivalent to mathematical
intuition” (Wang 1996, 186). On “the basis of what has been proved so far,”
Gödel said, “it remains possible that there may exist (and even be
empirically discoverable) a theorem-proving machine which in fact is
equivalent to mathematical intuition, but cannot be proved to be so, nor
even be proved to yield only correct theorems of finitary number theory”
(184–85).


However, the textual evidence indicates clearly that Gödel’s position
changed dramatically over time. In 1939 his answer to the question “Is the
human mind a machine?” is a bold “No.” By 1951, his discussion of the
relevant issues is nuanced and cautious. His position in his Gibbs lecture
of that year seems to be that the answer to the question is not known. By
1956, however, he entertains—somewhat guardedly and with qualifications—the
view that the

“thinking of a mathematician in the case of yes-or-no questions could be
completely replaced by machines” (1956, 375).


In later life, his position appears to have moved once again in the
direction of his earlier views (although the evidence from this period is
less clear).

[I]f the human mind were equivalent to a finite machine, then objective
mathematics not only would be incompletable in the sense of not being
contained in any well-defined axiomatic system, but moreover there would
exist absolutely unsolvable diophantine problems . . . where the epithet
“absolutely” means that they would be undecidable, not just within some
particular axiomatic system, but by any mathematical proof the human mind
can conceive. So the following disjunctive conclusion is inevitable: Either
. . . the human mind . . . infinitely surpasses the powers of any finite
machine [*], or else there exist absolutely unsolvable diophantine problems
[**]. (Gödel 1951, 310; emphasis in original)


Concerning alternative [*], Gödel says only that “It is not known whether
the first alternative holds” (312). He also says,

“It is conceivable (although far outside the limits of present-day science)
that brain physiology would advance so far that it would be known with
empirical certainty . . . that the brain suffices for the explanation of
all mental phenomena and is a machine in the sense of Turing” (309, note
13).


Gödel takes alternative [**], asserted under the hypothesis that the human
mind is equivalent to a finite machine, very seriously and uses it as the
basis of an extended argument for mathematical Platonism.

However, Wang reported that in 1972, in comments at a meeting to honor von
Neumann, Gödel said:

“The brain is a computing machine connected with a spirit” (Wang 1996,
189).


In discussion with Wang at about that time, Gödel amplified this remark:

Even if the finite brain cannot store an infinite amount of information,
the spirit may be able to. The brain is a computing machine connected with
a spirit. If the brain is taken to be physical and as a digital computer,
from quantum mechanics there are then only a finite number of states. Only
by connecting it to a spirit might it work in some other way. (Gödel in
Wang 1996, 193)


Turing believed that the mathematical objection has no force at all as an
objection to machine intelligence—but not because the objection is
necessarily mistaken in its claim that what the mind does is not always
computable. He gave this pithy statement of the mathematical objection in
his 1947 lecture:

[W]ith certain logical systems there can be no machine which will
distinguish provable formulae of the system from unprovable . . . On the
other hand if a mathematician is confronted with such a problem he would
search around and find new methods of proof, so that he ought eventually to
be able to reach a decision about any given formula. (393–94)


As we showed in section 1.1, this idea—that the devising of new methods is
a nonmechanical aspect of mathematics—is found in Turing’s logical work
from an early stage. He also mentions the idea in another of his wartime
letters to Newman:

The straightforward unsolvability or incompleteness results about systems
of logic amount to this
α) One cannot expect to be able to solve the Entscheidungsproblem for a
system
β) One cannot expect that a system will cover all possible methods of
proof. (Turing to Newman, ca. 1940a, 212)


Here Turing is putting an interesting spin on the incompleteness results,
which are usually stated in terms of there being true mathematical
statements that are not provable. On Turing’s way of looking at matters,
the incompleteness results show that no single system of logic can include
all methods of proof; and he advocates a progression of logical systems—his
ordinal logics—each more inclusive than its predecessors. He continued in
the letter:

[W]e. . . make proofs . . . by hitting on one and then checking up to see
that it is right. . . . When one takes β) into account one has to admit
that not one but many methods of checking up are needed. In writing about
ordinal logics I had this kind of idea in mind. (212–13)


In short, then, there might be men cleverer than any given machine, but
then again there might be other machines cleverer again, and so on. (Turing
1950, 451)


Turing’s discussions of learning repeatedly emphasized:
 • The importance of search: he hypothesized boldly that “intellectual
activity consists mainly of various kinds of search.” (1948, 431)
• The importance of the learner making and correcting mistakes: “[T]his
danger of the mathematician making mistakes is an unavoidable corollary of
his power of sometimes hitting upon an entirely new method.” (ca. 1951,
472)
• The importance of involving a random element: “[O]ne feature that . . .
should be incorporated . . . is a “random element.” . . . This would result
in the behavior of the machine not being by any means completely determined
by the experiences to which it was subjected.” (ca. 1951, 475)
• The importance of instruction modification: “What we want is a machine
that can learn from experience. The possibility of letting the machine
alter its own instructions provides the mechanism for this. . . . One can
imagine that after the machine had been operating for some time, the
instructions would have altered out of all recognition.” (1947, 393)

As I see it, you can have a program that enumerates provable statements
under some axiomatic system. But you can't automate its ability to expand
its provability power (e.g. by adding new axioms for which it has collected
empirical evidence) without accepting some non-zero chance that it will
enumerate a false proposition.  In my view, this is no different than the
present state of human mathematicians.  Intuition is just empiricism
developed through experience with formal systems.  If you are willing to
tolerate occasional errors or missteps, it is easy to create a program that
incorporates search, learning, instruction modification, etc.  Even
randomness can be achieved via pseudorandom generators, or forking the
process to select both possible bit values for each random bit required, or
through a connection to a hardware RNG.
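
To make that trade-off concrete, here is a minimal, purely illustrative
sketch (my own toy construction, not anything from the paper, and all the
names in it are hypothetical). It uses Euler's polynomial n^2 + n + 41,
which is prime for n = 0..39 but composite at n = 40, as a stand-in for a
conjecture with strong finite evidence: each instance is verified
mechanically and soundly, and the prover "expands its provability power"
by conjecturally adopting the universal statement as an axiom once enough
instances check out, thereby accepting exactly the non-zero chance of
enumerating a false proposition described above.

def is_prime(n):
    # Trial-division primality test; fine for the small values used here.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True


def euler_value(n):
    # Euler's polynomial n^2 + n + 41: prime for n = 0..39, composite at n = 40.
    return n * n + n + 41


class ConjecturingProver:
    # Toy "theorem enumerator" whose only statements are instances of the
    # form "euler_value(n) is prime".  Each instance is checked mechanically
    # and soundly.  The prover may also *conjecturally* adopt the universal
    # statement as a new axiom once enough instances have been verified --
    # the step that expands its provability power while introducing a
    # non-zero chance of "proving" a falsehood.

    def __init__(self, evidence_threshold=30):
        self.evidence_threshold = evidence_threshold
        self.universal_axiom_adopted = False

    def prove_instance(self, n):
        # Sound, mechanical verification of a single instance.
        return is_prime(euler_value(n))

    def gather_evidence_and_maybe_extend(self):
        # Empirical step: if instances 0..threshold-1 all hold, adopt
        # "for all n, euler_value(n) is prime" as a new axiom.
        if all(self.prove_instance(n) for n in range(self.evidence_threshold)):
            self.universal_axiom_adopted = True

    def claims_provable(self, n):
        # After the extension every instance follows from the new axiom,
        # including the false one at n = 40.
        return self.universal_axiom_adopted or self.prove_instance(n)


prover = ConjecturingProver()
prover.gather_evidence_and_maybe_extend()
print(prover.universal_axiom_adopted)   # True: 30 instances checked out
print(prover.claims_provable(40))       # True: "proved" from the new axiom
print(is_prime(euler_value(40)))        # False: 40^2 + 40 + 41 = 41^2

This is of course a caricature, but it has the structure I mean: the
mechanical part is sound, and all of the risk is concentrated in the step
where finite empirical evidence is promoted to an axiom.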

Jason
