On 17 May 2017 at 19:49, Brent Meeker <meeke...@verizon.net> wrote:

>
>
> On 5/17/2017 5:08 AM, David Nyman wrote:
>
>
> As a (very) rough and partial analogy, if I am on deck, and you are
> observing me from aloft, I can grasp that you are in a position to command
> an entire domain of such personally "unprovable" facts about me, despite my
> not being in a position to access them. Such personally unprovable facts
> might also bear directly on questions of my own consistency, for example if
> I were forced to rely on them in some crucial sense. Say you offered from
> your superior perspective to "be my eyes" in guiding my survival through
> some risky predicament below. I might choose to trust that guidance under
> hazard despite being in no position to prove independently the correctness
> of such a critical viewpoint. So in such a situation I might be unable to
> be unambiguously convinced of my own consistency but nonetheless choose to
> trust in it implicitly in order to promote my survival.
>
>
> I think this is a common but misleading way of thinking about
> incompleteness...that a super-theory, in which consistency of a sub-theory
> is provable, is more truthful and comprehends more knowledge.  The
> incompleteness has been proven not by finding some horizon beyond which
> vast new knowledge is found; it has been proven by showing that some
> self-referential sentences cannot be proven on pain of inconsistency.
> Whether there are any interesting new theorems in the super-theory is a
> separate question.
>

Yes, I agree that my admittedly hasty analogy is not entirely
satisfactory, so let's let it lie for now. I also agree that the steps
by which more comprehensive theories subsume sub-theories with
compensatory axioms may strike one at first blush as relatively trivial.
However, this may be misleading. For one thing, there is of course an
effective infinity of such steps, such that at each level the consistency
of all prior ones can be assured. And for another, since the
self-referential incompleteness you mention is arguably a species of
contradiction ("yes I am / no I'm not"), is there not thereby a danger
that self-referential programs incorporating such uncompensated
inconsistencies might loop indefinitely or succumb to radically
unjustified inferences? If so, the process of iterative correction is
indispensable.
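
To make the iteration I have in mind explicit (standard notation only; I
take PA as the base theory purely for illustration, with Prov_T the
provability predicate of T):

\[
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)
\qquad \text{(the self-referential G\"odel sentence of } T\text{)}
\]
\[
T_0 = \mathrm{PA}, \qquad T_{n+1} = T_n + \mathrm{Con}(T_n)
\]
\[
\text{G\"odel II: if } T_n \text{ is consistent, then }
T_n \nvdash \mathrm{Con}(T_n)
\]

So each compensatory step adds an axiom the previous level could not
prove, and the ladder of corrections never terminates.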

The general idea, I guess, in the computationalist framework is that this
iteratively corrected, expanding process ultimately spirals to some
critical conjunction of complexity and relative consistency, although by
definition the "topmost" effective level will never itself be provably
(completely) consistent. Hence, as I suggested above, if our
consciousness indeed supervenes on such a level, then we correspondingly
cannot be completely confident in our own consistency.
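
To be explicit about why the "topmost" level can never certify itself
(same notation as above): even the union of all the finite stages,

\[
T_\omega \;=\; \bigcup_{n<\omega} T_n,
\]

is still an effectively axiomatized theory, so G\"odel II applies to it
all over again: if T_\omega is consistent, it cannot prove
Con(T_\omega). As I understand it, Turing's ordinal logics pursue
exactly this iteration into the transfinite, and every effective stage
remains in the same predicament about its own consistency.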

Now of course you will say all this is wishful thinking, etc. But then,
if beggars are to ride, wishes must eventually summon horses.

David

>
>
> Brent
>
