> > > Anyway, in intuitionistic logic it is natural to identify proofs with
> > > programs and propositions with types. This is maybe best illustrated
> > > by the BHK (Brouwer-Heyting-Kolmogorov) semantics:
> > >
> > > A proof of a /\ b is a proof of a and a proof of b, hence a /\ b = (a,b)
> >
> > Sorry, are you going to say that a tuple (a,b) is the same as "a AND b"?

Not quite. He said that tuples and logical conjunctions are often identified
in some logics. Here's an explanation attempt from a non-specialist:

In some logics (keywords: intuitionistic, constructive), propositions are
not accepted as true unless they come with an explicit, constructive proof.

This has useful applications for typed programming languages. Read
types as claims of existence (e.g., "Bool" interpreted as a proposition
means "there are booleans", and in general, a type "t" when interpreted
as a proposition, means "there are objects of type t"). Now, the core of
constructive logics is that we do not accept such claims to be true unless
we are given an example, or at least a method by which such a concrete
example can be constructed.

For base types such as "Bool", this is quite simple (*), as we can give
some example objects, each of which is a proof of our claim:

True :: Bool    -- read: "True" is a boolean (so there are booleans)
False :: Bool   -- read: "False" is a boolean

For complex types constructed from simpler types, we can give proofs
constructed from the simpler proofs that prove the existence claims for
the element types.

(Int,Bool) -- read: there are pairs of integers and booleans
(1,False)::(Int,Bool) -- read: "(1,False)" is such a pair

To prove the existence of pairs of integers and booleans, I have to
prove the existence of integers AND I have to prove the existence of
booleans. Also, I cannot simply say "'Int and Bool' is true because I've
seen proofs for 'Int' and for 'Bool' - trust me!". In constructive logic,
you won't believe me without the constructive proof. So I have to
package my two proofs for "Int" and for "Bool" into one pair of proofs.
And if I have a proof for the existence of pairs, I can extract from it
the proofs for its component types.

This is why pairs correspond to conjunctions in such a logic.
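
To make this concrete in Haskell: pairing is the introduction rule for
/\, and the projections are the elimination rules. A small sketch (the
names are my own, for illustration):

conjIntro :: a -> b -> (a,b)    -- from a proof of a and a proof of b,
conjIntro x y = (x,y)           -- construct a proof of a /\ b

conjElimL :: (a,b) -> a         -- from a proof of a /\ b,
conjElimL (x,_) = x             -- extract the proof of a

conjElimR :: (a,b) -> b         -- ... and the proof of b
conjElimR (_,y) = y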

With these examples, Thorsten's brief summary (with the slight corrections
noted earlier) should make sense, I hope, but for completeness:

Either a b
-- read: there are objects of type a OR there are objects of type b
-- to prove this I can either give a (non-bottom) object/proof of type a
-- OR a (non-bottom) object/proof of type b,
-- so this corresponds to a disjunctive proposition

a -> b
-- read: provided that there are objects of type a,
-- there are objects of type b
-- to prove this, I have to give an object of type b,
-- but I may assume that I have an object of type a at hand
-- so this corresponds to an implication
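
Correspondingly, Left and Right are the introduction rules for \/, and
case analysis (the Prelude function "either") is the elimination rule.
A sketch, again with names of my own choosing:

disjIntroL :: a -> Either a b    -- a proof of a proves "a or b"
disjIntroL = Left

disjIntroR :: b -> Either a b    -- and so does a proof of b
disjIntroR = Right

disjElim :: (a -> c) -> (b -> c) -> Either a b -> c
disjElim = either                -- if a implies c and b implies c,
                                 -- then "a or b" implies c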

> > In formal logic we say just this:
> >
> >   a       (true)
> >   a => b  (true)
> >   ---------------
> >   b       (true)

Which happens to correspond to the typing rule for function application:

x :: a
f :: a -> b
----------
f x :: b

Given a proof for "as exist" and a proof for "if as exist, then bs exist as
well", we can construct a proof that "bs exist" by simply applying the proof
for the implication to the proof of its assumption; evaluating the function
applications corresponds to simplifying the proof (we hope).
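
In Haskell, this proof construction is literally just function
application - a minimal sketch (the name modusPonens is my own):

modusPonens :: a -> (a -> b) -> b
modusPonens x f = f x    -- apply the proof of the implication
                         -- to the proof of its assumption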

> > > As an example, how do we prove (a \/ b) /\ c -> (a /\ c) \/ (b /\ c)
> > > in Haskell ?
> > >
> > > lem :: (Either a b,c) -> Either (a,c) (b,c)
> > > lem (Left a,c) = Left (a,c)
> > > lem (Right b,c) = Right (b,c)
> >
> > Sorry, I don't have to know all this. Do you really think I am
> > illiterate in propositional logic or boolean algebra?

The beauty of this correspondence (or isomorphism) is that you can use
what you already know (about logic) in a new context (types).

lem can be interpreted as a proof for the proposition corresponding to
its type (it constructs a proof for the right-hand side from a proof of
the left-hand side).
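
The correspondence also suggests that the converse implication holds,
and indeed the proof writes itself (the name lem' is my own):

lem' :: Either (a,c) (b,c) -> (Either a b, c)
lem' (Left (a,c))  = (Left a, c)
lem' (Right (b,c)) = (Right b, c)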

The main caveat is the insistence on constructive proofs (in the classical
logic most of us tend to learn, there is this favourite proof technique: if
I assume that xs don't exist, I am led to a contradiction, so xs have to
exist, even if I haven't seen any of them yet - this is not acceptable in
constructive logics).
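
This difference can be made concrete in Haskell, representing "false" as
an empty type - a sketch, where the type Void and all names are my own:

data Void                  -- no constructors: corresponds to "false"
type Not a = a -> Void     -- "not a" read as "a implies false"

notNotIntro :: a -> Not (Not a)    -- constructively fine: a implies
notNotIntro x notX = notX x        -- "not (not a)"

-- notNotElim :: Not (Not a) -> a
-- classical double-negation elimination: no (non-bottom) definition
-- exists, as there is no way to construct an a from a mere refutation
-- of "not a"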

[haven't read the papers on a correspondence for classical logic yet, but
I assume they exist, for otherwise I would contradict Frank Atanassow ;-]

> > > The disputed forall is just the (2nd order) quantification over all
> > > propositions.

forall a b. (a, b) -> b
-- read: for all (types) a and for all (types) b, if (a proof exists that)
-- a(-typed object)s exist and (a proof exists that) b(-typed object)s
-- exist, then (a proof exists that) b(-typed object)s exist

forall a b. a -> b -> b
-- read: for all a and for all b, if as exist, then (a proof exists that)
-- the existence of bs implies the existence of bs

forall a b. (a -> b) -> b
-- read: for all a and for all b, if (a proof exists that) the existence of
-- as implies the existence of bs, then bs exist

forall b. (forall a. a -> b) -> b
-- read: for all b, if (a proof exists that) for all a, the existence of as
-- implies the existence of bs, then bs exist
-- [note the limited scope of the inner forall]

None of the statements that correspond to these types is true unless you
can give concrete non-bottom elements of the types (empty types correspond
to false propositions). So, "xs exist" should probably rather be "xs can be
constructed". In all of these cases, the proofs would be functions,
and the correspondence between types and propositions allows us to
derive some properties that such functions must have (*).

In the fourth case, the argument proof/function must not make any assumptions
about as, because it has to work on all as (the proof has to be supplied
before a specific a is given; the quantification over a is inside the
parameter proof). In the second case, the proof can be given after both a
and b have been given (the proof is inside of both quantifiers).
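
Trying to write the corresponding Haskell functions makes this concrete.
A sketch (the names are my own; an extension like RankNTypes is needed
for the nested forall in the fourth case):

proof1 :: forall a b. (a, b) -> b
proof1 (_, y) = y       -- first case: provable

proof2 :: forall a b. a -> b -> b
proof2 _ y = y          -- second case: provable

-- proof3 :: forall a b. (a -> b) -> b
-- third case: not provable - a and b are fixed by the caller, and we
-- have no proof of a to feed to the argument function

proof4 :: forall b. (forall a. a -> b) -> b
proof4 f = f ()         -- fourth case: provable - the argument works
                        -- for all a, so we may instantiate it at (),
                        -- for example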

It is this limitation of potential function definitions by their required
type that is used in the context of runST.

runST :: forall a. (forall s. ST s a) -> a
-- read: for all a, if (a proof exists that),
-- for all s, objects of type ST s a exist,
-- then objects of type a exist

Again, there is a quantifier inside an inner proof/function. To construct an
object of type a, runST (as a proof for the non-emptiness of its type) can
make any assumption it needs about s, and so the inner proof can make
no assumptions about s. It is not necessary for ST s a to be a function, but
it should be a container, because the inner proof is runST's only source
of as.
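
For reference, a standard example of a legitimate use, with the ST
primitives as found in Control.Monad.ST and Data.STRef:

import Control.Monad.ST
import Data.STRef

sumST :: [Int] -> Int
sumST xs = runST $ do     -- the state is purely internal, so s does
  ref <- newSTRef 0       -- not occur in the result type Int
  mapM_ (\x -> modifySTRef ref (+x)) xs
  readSTRef ref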

To expand this a little, assume that ST s a were just a type of pairs; then

noRunST st = (fst.last) [(snd st, True, "or not"), (snd st, fst st, fst st)]

would require that the state type s can be instantiated to both Bool and
String (each use of st may pick its own s), but would ignore the state
component and return the value component, justifying the type

noRunST :: forall a. (forall s. (s,a)) -> a

noRunST is an overly lazy state transformer: as it ignores its state
component, it would happily accept any non-constructive proof for
the state component of the pair.

But in a world without bottom (*), we have little chance of constructing
the required proof parameter ourselves, and we have to use the primitives
supplied for the ST monad - they thread the state along, making sure that
only one state type is used in all operations of a given sequence, without
making any further assumptions about what that type may be.
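
For instance, the types of the STRef primitives (as found in Data.STRef)
show this threading:

newSTRef   :: a -> ST s (STRef s a)
readSTRef  :: STRef s a -> ST s a
writeSTRef :: STRef s a -> a -> ST s ()
-- the same s runs through every operation in a sequence, but nothing
-- is ever assumed about what s is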

A primitive operation

doSomeThing :: forall s a. ST s a

is a proof that objects of the required type exist, which we can feed into
runST, but what about operations such as

get :: forall s. ST s (T s)
-- where T is some type constructor,
-- but not a type synonym that ignores s

Doesn't this allow the "hidden" state type to escape? Well, no: the
requirement in runST is that the proof parameter cannot make any
assumptions about s. In particular, we cannot assume that there is
any relation between s and a (as get does) in

runST :: forall a. (forall s. ST s a) -> a

So, this is why an "internal" use of the state type is okay (has a type)

runST (get >> return ()) :: ()

whereas an escaping reference to the "hidden" state type

runST get :: ???

is not okay (does not have a type in this system).
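
A concrete instance of this escape check, using newSTRef as a "get"-like
operation whose result type mentions s:

ok :: ()
ok = runST (newSTRef () >> return ())    -- s stays internal: fine

-- leak = runST (newSTRef ())
-- rejected: the result type "STRef s ()" mentions the quantified s,
-- which must not escape its scope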

[I thought this was discussed extensively in the "State in Haskell" paper,
but found, to my surprise, that the explanation was intermingled with
problem descriptions and details about the type checking process..]

> > and what I was saying (UP TO NOW NOBODY could show
> > where I was wrong) is the following:
> >
> > Hi folks, you can forget everything about this second-order
> > quantification, the usual forall does it all.

You might want to have a look at Mark Jones' paper on "First-class
Polymorphism with Type Inference" (which should be available via his home
page, http://www.cse.ogi.edu/~mpj/). Section 2 discusses some problems of
eliminating nested quantifiers (replacing second order by first order
quantifiers). The conclusion is that not all ideas suggested by the
correspondence between types and propositions are directly applicable.

> > What I want to say is this:
> >
> >  put   forall s. ST s a   in a logical phrase. This is an invitation.

forall s. ST s a
-- read: for all (types) s, objects of type "ST s a" exist

> > Please, people, don't come with books nor with
> > names to strengthen claims.
> > Talk logically about it, try to define very clearly what you mean.

One advantage of books is that the authors have spent an enormous
amount of time and effort to talk logically about it and to define very
clearly what they mean (another is that pointers to books and online
documents do not overload people's mailboxes..).

If you have read this far: all of this and more can be found in the
references given (some of which might even be available online?).
Names are no proof, but can help to find the information related
to a specific question in a book that discusses a broader topic,
and can serve as a pointer with which many people associate this
related information.

Hth,
Claus

(*) with the caveat required for Haskell that we only deal with non-bottom
elements: non-termination and abnormal termination are elements of every
type in Haskell - if we accepted them as constructive proofs, every
proposition would have a proof!

PS. There has been a trend towards very long emails recently. As you can
see, I am prone to follow this bad habit as well, and so I feel inclined to
advise my fellow sinners: the longer the message, the less likely that
anyone reads it to the end.


