Re: [Haskell-cafe] Why isn't Program Derivation a first class citizen?

2013-02-14 Thread Joachim Breitner
Hi,

Am Dienstag, den 12.02.2013, 17:47 -0500 schrieb Nehal Patel:
 To me, it seems that something like this should be possible -- am i
 being naive? does it already exist?  have people tried and given up?
 is it just around the corner?  can you help me make sense of all of
 this?

A related project is HOLCF-Prelude, by Christian Sternagel and others.
What they do is reimplement the Haskell Prelude as HOLCF data types
and functions in the theorem prover Isabelle. This allows you to write
functional code as HOLCF functions, prove properties of them (using
functions from the Prelude and a library of lemmas about them) and, as
long as the definitions are the same as in your Haskell code, be happy
about it.

It is not a silver bullet, and not exactly what you need. For example,
you still have to manually translate between the HOLCF definition and
the Haskell code. Also, Haskell is not covered completely. Exceptions,
for example, are completely ignored.

But nevertheless it is useful: For example, we have started to verify
some of the transformations suggested by hlint, proved quite a few of
them and even found (very few) wrong ones.
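For a flavour of the kind of transformation involved (a hypothetical example, not taken from the HOLCF-Prelude sources): hlint suggests rewriting `concat (map f xs)` to `concatMap f xs`. In plain Haskell one can only test such an equation on finite, total inputs; the point of the HOLCF proofs is that they also cover partial and infinite values:

```haskell
-- A transformation in the style hlint suggests:
--   concat (map f xs)  ==>  concatMap f xs
-- Testing on finite, total inputs is easy; proving it for bottoms and
-- infinite lists as well is what the HOLCF formalisation is for.
lhs, rhs :: (a -> [b]) -> [a] -> [b]
lhs f xs = concat (map f xs)
rhs f xs = concatMap f xs

main :: IO ()
main = do
  let f n = [n, n * 10]
      inputs = [[], [1], [1, 2, 3], [0 .. 9]] :: [[Int]]
  print (all (\xs -> lhs f xs == rhs f xs) inputs)
```

On these samples both sides agree, but a test like this is evidence, not a proof.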

And what wren said about the complexity of machine-verified proofs is
very true, especially when you also cover non-termination and laziness
(as we do) and want to prove properties involving type classes with as
few assumptions about them as possible, and suddenly have the problem
that (==) can do arbitrary stuff.

You can find the code at https://sourceforge.net/p/holcf-prelude/
Comments and contributions are welcome!

Greetings,
Joachim

-- 
Joachim nomeata Breitner
Debian Developer
  nome...@debian.org | ICQ# 74513189 | GPG-Keyid: 4743206C
  JID: nome...@joachim-breitner.de | http://people.debian.org/~nomeata



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why isn't Program Derivation a first class citizen?

2013-02-13 Thread wren ng thornton

On 2/12/13 5:47 PM, Nehal Patel wrote:

And so my question is, that in 2013, why isn't this process a full fledged part 
of the language? I imagine I just don't know what I'm talking about, so correct 
me if I'm wrong, but this is how I understand the workflow used in practice 
with program derivation:  1) state the problem pedantically in code, 2) work 
out a bunch of proofs with pen and paper, 3) and then translate that back into 
code, perhaps leaving you with function_v1, function_v2, function_v3, etc   -- 
that seems fine for 1998, but why is it still being done this way?


I think there's a lot more complexity in (machine-verifiably) proving 
things than you realize. I've done a fair amount of programming in Coq 
and a bit in Twelf and Agda, and one of the things I've learned is that 
the kinds of proof we do on paper are rarely rigorous and are almost 
never spelled out in full detail. That is, after all, not the point of a 
pen & paper proof. Pen & paper proofs are about convincing humans that 
some idea makes sense, and humans are both reasonable and gullible. 
Machine-verified proofs, on the other hand, are all about walking a 
naive computer through every possible contingency and explaining how and 
why things must be the case even in the worst possible world. There's no 
way to tell the compiler "you know what I mean", or "the other cases are 
similar", or "left as an exercise for the reader". And it's not until 
trying to machine-formalize things that we realize how often we say 
things like that on paper. Computers are very bad at generalizing 
patterns and filling in the blanks; but they're very good at 
exhaustively enumerating contingencies. So convincing a computer is 
quite the opposite from convincing a human.



That said, I'm all for getting more theorem proving goodness into 
Haskell. I often lament the fact that there's no way to require proof 
that instances obey the laws of a type class, and that GHC's rewrite 
rules don't require any proof that the rule is semantics preserving. The 
big question is, with things as they are today, does it make more sense 
to take GHC in the direction of fully-fledged dependent types? or does 
it make more sense to work on integration with dedicated tools for 
proving things?
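One can approximate "laws as checkable obligations" today with ordinary property tests — a sketch, hand-rolled over sample inputs to stay dependency-free (QuickCheck would normally generate the cases, and of course testing is far weaker than the proof obligation being lamented here):

```haskell
-- Monoid laws stated as executable properties. This only *tests* the
-- laws on sample data; nothing in the language *requires* a proof.
assoc :: (Monoid m, Eq m) => m -> m -> m -> Bool
assoc x y z = (x <> y) <> z == x <> (y <> z)

leftId, rightId :: (Monoid m, Eq m) => m -> Bool
leftId x  = mempty <> x == x
rightId x = x <> mempty == x

main :: IO ()
main = do
  let samples = ["", "a", "bc"] :: [String]
  print (and [assoc x y z | x <- samples, y <- samples, z <- samples])
  print (all leftId samples && all rightId samples)
```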


There's already some work towards integration. Things like Yices and SBV 
can be used to prove many things, though usually not so much about 
programs per se, whereas Liquid Haskell[1] is working specifically 
toward automated proving of preconditions and postconditions.



[1] http://goto.ucsd.edu/~rjhala/liquid/haskell/blog/about/
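To illustrate the precondition/postcondition style Liquid Haskell aims at (syntax as in its tutorial material; the `{-@ ... @-}` annotations are ordinary comments to GHC, so this file is also plain Haskell):

```haskell
-- The refinement annotation says the divisor must be non-zero. GHC
-- ignores it; the Liquid Haskell checker reads it and tries to
-- discharge the precondition automatically at every call site.

{-@ safeDiv :: Int -> {d:Int | d /= 0} -> Int @-}
safeDiv :: Int -> Int -> Int
safeDiv n d = n `div` d

main :: IO ()
main = print (safeDiv 10 2)  -- 5
```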

--
Live well,
~wren



Re: [Haskell-cafe] Why isn't Program Derivation a first class citizen?

2013-02-13 Thread Austin Seipp
On Tue, Feb 12, 2013 at 4:47 PM, Nehal Patel nehal.a...@gmail.com wrote:

 A few months ago I took the Haskell plunge, and all goes well...
 ... snip ...
 And so my question is, that in 2013, why isn't this process a full fledged 
 part of the language? I imagine I just don't know what I'm talking about, so 
 correct me if I'm wrong, but this is how I understand the workflow used in 
 practice with program derivation:  1) state the problem pedantically in code, 
 2) work out a bunch of proofs with pen and paper, 3) and then translate that 
 back into code, perhaps leaving you with function_v1, function_v2, 
 function_v3, etc   -- that seems fine for 1998, but why is it still being 
 done this way?

There is no one-stop shop for this sort of stuff. There are plenty of
ways to show equivalence properties for certain classes of programs.

I am unsure why you believe people tend to use pen and paper -
machines are actually pretty good at this stuff! For example, I have
been using Cryptol lately, which is a functional language from Galois
for cryptography. One feature it has is equivalence checking: you can
ask if two functions are provably equivalent. This is used in the
examples to prove that a naive Rijndael implementation is equivalent
to an optimized AES implementation.

You might be asking why you can't do this for Haskell. Well, you can -
kind of. There is a library called SBV which is very similar in spirit
to Cryptol but expressed as a Haskell DSL (and not really for Crypto
specifically.) All SBV programs are executable as Haskell programs -
but we can also compile them to C, and prove properties about them by
using an SMT solver. This is 'free' and requires no programmer
intervention beyond stating the property. So we can specify faster
implementations, and properties that show they're equivalent. SMT
solvers have a very low cost and are potentially very useful for
certain problems.
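A rough sketch of the idea in plain Haskell, so no SMT solver is assumed: state a naive specification and an "optimized" implementation, then check them equivalent — here by brute force over all 8-bit inputs, which is exactly the obligation SBV's `prove` would instead discharge symbolically via an SMT solver:

```haskell
import Data.Bits (popCount, shiftR, (.&.))
import Data.Word (Word8)

-- Naive spec: count set bits by repeated masking and shifting.
naiveCount :: Word8 -> Int
naiveCount 0 = 0
naiveCount w = fromIntegral (w .&. 1) + naiveCount (w `shiftR` 1)

-- "Optimized" version: defer to the builtin.
fastCount :: Word8 -> Int
fastCount = popCount

-- 8-bit inputs are few enough to check exhaustively. With Data.SBV one
-- would instead write, roughly:  prove $ \x -> naive x .== fast x
main :: IO ()
main = print (all (\w -> naiveCount w == fastCount w) [minBound .. maxBound])
```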

So there are a lot of good approaches to tackling these problems in
certain domains, some of them more automated than others. You can't
really provide constructive 'proofs' like you do in Agda. The language
just isn't meant for it, and isn't really expressive enough for it,
even when GHC features are heavily abused.

 What I'm asking about might sound related to theorem provers -- but if so, I 
 feel like what I'm thinking about is not so much the very difficult problem 
 of automated proofs or even proof assistants, but the somewhat simpler task 
 of proof verification. Concretely, here's a possibility of how I imagine 
 the workflow could be:

I believe you are actually precisely talking about theorem provers, or
at least this is the impression I get. The reason for that is the next
part:

 ++ in code, pedantically setup the problem.
 ++ in code, state a theorem, and prove it -- this would require a revision to 
 the language (Haskell 201x) and perhaps look like Isabelle's Isar -- a 
 -structured- proof syntax that is easy for humans to construct and understand 
 -- in particular it would be possible to easily reuse previous theorems and 
 leave out various steps.  At compile time, the compiler would check that the 
 proof was correct, or perhaps complain that it can't see how I went from step 
 3 to step 4, in which case I might have to add in another step (3b) to help 
 the compiler verify the proof.
 ++ afterwards, I would use my new theorems to create semantically identical 
 variants of my original functions (again this process would be integrated 
 into the language)

Because people who write things in theorem provers reuse things! Yes,
many large programs in Agda and Coq for example use external libraries
of reusable proofs. An example of this is Agda's standard library for
a small case, or CoRN for Coq in the large. And the part about
'leaving things out' and filling them in if needed - well, both of
these provers have had implicit arguments for quite a while. But you
can't really make a proof out of axioms and let the compiler
figure out everything. It's just not how those tools in particular
work.

I find it very difficult to see the difference between what you are
proposing and what people are doing now - it's often possible to give a
proof of something regardless of the underlying object. You may prove
some law holds over a given structure, for example, and then show some
other definitive 'thing' is classified by that structure. An example is
a monoid, which has an associative binary operation. The /fact/ that the
operation is associative is a law about a monoid's behavior.

And if you think about it, we have monoids in Haskell. And we expect
that to hold. And then actually, if you think about it, that's pretty
much the purpose of a type class in Haskell: to say some type abides
by a set of laws and properties regardless of its definitive
structure. And it's also like an interface in Java, sort of: things
which implement that basic interface must abide by *some* rules of
nature that make it so.
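To make the monoid point concrete (illustrative code, not anyone's real instances): conjunction on Bool genuinely satisfies the laws, while subtraction on Int has the right *shape* for a binary operation with a unit but fails associativity — exactly the kind of claim the type class itself never checks:

```haskell
-- Conjunction really is a monoid: (&&) is associative, with unit True.
andAssoc :: Bool -> Bool -> Bool -> Bool
andAssoc x y z = ((x && y) && z) == (x && (y && z))

-- Subtraction has the same shape (binary op, 0 as a candidate unit)
-- but is not associative, so a "Monoid Int via (-)" would be lawless.
subAssoc :: Int -> Int -> Int -> Bool
subAssoc x y z = ((x - y) - z) == (x - (y - z))

main :: IO ()
main = do
  print (and [andAssoc x y z | x <- bs, y <- bs, z <- bs])  -- True
  print (subAssoc 5 3 1)                                    -- False
  where bs = [False, True]
```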

My point is, the 

Re: [Haskell-cafe] Why isn't Program Derivation a first class citizen?

2013-02-13 Thread Rustom Mody
On Wed, Feb 13, 2013 at 4:17 AM, Nehal Patel nehal.a...@gmail.com wrote:

 A few months ago I took the Haskell plunge, and all goes well... -- but I
 really want to understand the paradigms as fully as possible, and as it
 stands, I find myself with three or four questions for which I've yet to
 find suitable answers.  I've picked one to ask the cafe -- like my other
 questions, it's somewhat amorphous and ill posed -- so much thanks in
 advance for any thoughts and comments!

 
 Why isn't Program Derivation a first class citizen?
 ---

 (Hopefully the term "program derivation" is commonly understood?  I mean
 it in the sense usually used to describe the main motif of Bird's "The
 Algebra of Programming".  Others seem to use it as well...)

 For me, it has come to mean solving problems in roughly the following way:
 1) Defining the important functions and data types in a pedantic way so
 that the semantics are as clear as possible to a human, but possibly
 "inefficient" (I use quotes because one of my other questions is about
 whether it is really possible to reason effectively about program
 performance in ghc/Haskell…)
 2) Work out some proofs of various properties of your functions and data
 types
 3) Use the results from step 2 to provide an alternate implementation with
 provably same semantics but possibly much better performance.

 To me it seems that so much of Haskell's design is dedicated to making
 steps 1-3 possible, and those steps represent for me (and I imagine many
 others) the thing that makes Haskell (and its cousins) so beautiful.

 And so my question is, that in 2013, why isn't this process a full fledged
 part of the language? I imagine I just don't know what I'm talking about,
 so correct me if I'm wrong, but this is how I understand the workflow used
 in practice with program derivation:  1) state the problem pedantically in
 code, 2) work out a bunch of proofs with pen and paper, 3) and then
 translate that back into code, perhaps leaving you with function_v1,
 function_v2, function_v3, etc   -- that seems fine for 1998, but why is it
 still being done this way?

 What I'm asking about might sound related to theorem provers, -- but if so
 ,I feel like what I'm thinking about is not so much the very difficult
 problem of automated proofs or even proof assistants, but the somewhat
 simpler task of proof verification. Concretely, here's a possibility of how
 I imagine   the workflow could be:

 ++ in code, pedantically set up the problem.
 ++ in code, state a theorem, and prove it -- this would require a revision
 to the language (Haskell 201x) and perhaps look like Isabelle's Isar -- a
 -structured- proof syntax that is easy for humans to construct and
 understand -- in particular it would be possible to easily reuse previous
 theorems and leave out various steps.  At compile time, the compiler would
 check that the proof was correct, or perhaps complain that it can't see how
 I went from step 3 to step 4, in which case I might have to add in another
 step (3b) to help the compiler verify the proof.
 ++ afterwards, I would use my new theorems to create semantically
 identical variants of my original functions (again this process would be
 integrated into the language)

 While I feel like theorem provers offer some aspects of this workflow (in
 particular with the ability to export Scala or Haskell code when you are
 done), I feel that in practice they are only useful for modeling fairly
 technical things like protocols and crypto stuff -- whereas if this
 existed within Haskell proper it would find much broader use and have
 greater impact.

 I haven't fully fleshed out all the various steps of what exactly it would
 mean to have program derivation be a first class citizen, but I'll pause
 here and followup if a conversation ensues.

 To me, it seems that something like this should be possible -- am i being
 naive? does it already exist?  have people tried and given up? is it just
 around the corner?  can you help me make sense of all of this?

 thanks! nehal






I am stating these things from somewhat foggy memory (don't have any
lambda-calculus texts handy) and so will welcome corrections from those who
know better…

In lambda-calculus, if reduction means beta-reduction, then equality is
decidable
Adding eta into the mix makes it undecidable
Haskell is basically a sugared beta-reducer
A prover must -- to be able to go anywhere -- be a beta-eta reducer.
Which is why a theorem-prover must be fundamentally more powerful and
correspondingly less decidable than an implemented programming language.

If a theorem-prover were as 'hands-free' as a programming language, or
if an implemented programming language could do the proving that a
theorem-prover can do, it would contradict the halting problem / Rice's
theorem.

-- 
http://www.the-magus.in

Re: [Haskell-cafe] Why isn't Program Derivation a first class citizen?

2013-02-13 Thread Jon Fairbairn
Rustom Mody rustompm...@gmail.com writes:

 On Wed, Feb 13, 2013 at 4:17 AM, Nehal Patel nehal.a...@gmail.com wrote:
 
 Why isn't Program Derivation a first class citizen?
 ---
 I am stating these things from somewhat foggy memory (don't have any
 lambda-calculus texts handy) and so will welcome corrections from those who
 know better…

 In lambda-calculus, if reduction means beta-reduction, then equality is

semi

 decidable

 If a theorem-prover were as 'hands-free' as a programming language
 Or if an implemented programming language could do the proving that a
 theorem-prover could do, it would contradict the halting-problem/Rice
 theorem.

Just so, but I’ve long thought (I suggested something of the sort to a
PhD student in about 1991, but he wasn’t interested) that, since
automatic programme transformation isn’t going to work (for the reason
you give), having manually written programme transformations as part of
the code would be a useful way of coding. RULES pragmas go a little way
towards this, but the idea would be that the language supply various
known-valid transformation operators (akin to those in theorem provers
such as HOL), and programmers would explicitly write transformations for
their functions that the compiler would apply.
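A minimal sketch of what a RULES pragma looks like today — the rule below is essentially the classic map-fusion law. Note that GHC simply trusts it: nothing checks that the two sides denote the same function, which is exactly the gap being discussed.

```haskell
{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}

-- Whether or not the rule fires during optimisation, the result is the
-- same only because *we* promised so; GHC does not verify the equation.
double :: [Int] -> [Int]
double = map (* 2) . map (+ 1)

main :: IO ()
main = print (double [1, 2, 3])  -- [4,6,8]
```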

-- 
Jón Fairbairn jon.fairba...@cl.cam.ac.uk


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Why isn't Program Derivation a first class citizen?

2013-02-12 Thread Nehal Patel
A few months ago I took the Haskell plunge, and all goes well... -- but I 
really want to understand the paradigms as fully as possible, and as it stands, 
I find myself with three or four questions for which I've yet to find suitable 
answers.  I've picked one to ask the cafe -- like my other questions, it's 
somewhat amorphous and ill posed -- so much thanks in advance for any thoughts 
and comments!


Why isn't Program Derivation a first class citizen?
---

(Hopefully the term "program derivation" is commonly understood?  I mean it in 
the sense usually used to describe the main motif of Bird's "The Algebra of 
Programming".  Others seem to use it as well...) 

For me, it has come to mean solving problems in roughly the following way:
1) Defining the important functions and data types in a pedantic way so that 
the semantics are as clear as possible to a human, but possibly "inefficient" 
(I use quotes because one of my other questions is about whether it is really 
possible to reason effectively about program performance in ghc/Haskell…)
2) Work out some proofs of various properties of your functions and data types 
3) Use the results from step 2 to provide an alternate implementation with 
provably same semantics but possibly much better performance.
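The textbook instance of this three-step workflow (a standard example, not taken from Bird specifically) is deriving fast reverse: step 1 gives the clear but quadratic definition, step 2 proves the lemma `revSpec xs ++ ys == revApp xs ys` by induction on `xs` with pen and paper, and step 3 reads the efficient program off the proof:

```haskell
-- Step 1: the pedantic, obviously-correct definition (quadratic,
-- because (++) re-walks the accumulated result at every step).
revSpec :: [a] -> [a]
revSpec []       = []
revSpec (x : xs) = revSpec xs ++ [x]

-- Step 3: the derived linear version, justified by the step-2 lemma
--   revSpec xs ++ ys == revApp xs ys
revApp :: [a] -> [a] -> [a]
revApp []       ys = ys
revApp (x : xs) ys = revApp xs (x : ys)

revFast :: [a] -> [a]
revFast xs = revApp xs []

main :: IO ()
main = print (revSpec [1 .. 5 :: Int] == revFast [1 .. 5])
```

The lemma itself lives only on paper — which is precisely the complaint in this thread.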

To me it seems that so much of Haskell's design is dedicated to making steps 
1-3 possible, and those steps represent for me (and I imagine many others) the 
thing that makes Haskell (and its cousins) so beautiful.

And so my question is, that in 2013, why isn't this process a full fledged part 
of the language? I imagine I just don't know what I'm talking about, so correct 
me if I'm wrong, but this is how I understand the workflow used in practice 
with program derivation:  1) state the problem pedantically in code, 2) work 
out a bunch of proofs with pen and paper, 3) and then translate that back into 
code, perhaps leaving you with function_v1, function_v2, function_v3, etc   -- 
that seems fine for 1998, but why is it still being done this way?

What I'm asking about might sound related to theorem provers -- but if so, I 
feel like what I'm thinking about is not so much the very difficult problem of 
automated proofs or even proof assistants, but the somewhat simpler task of 
proof verification. Concretely, here's a possibility of how I imagine the 
workflow could be:

++ in code, pedantically set up the problem. 
++ in code, state a theorem, and prove it -- this would require a revision to 
the language (Haskell 201x) and perhaps look like Isabelle's Isar -- a 
-structured- proof syntax that is easy for humans to construct and understand 
-- in particular it would be possible to easily reuse previous theorems and 
leave out various steps.  At compile time, the compiler would check that the 
proof was correct, or perhaps complain that it can't see how I went from step 
3 to step 4, in which case I might have to add in another step (3b) to help 
the compiler verify the proof.  
++ afterwards, I would use my new theorems to create semantically identical 
variants of my original functions (again this process would be integrated into 
the language)

While I feel like theorem provers offer some aspects of this workflow (in 
particular with the ability to export Scala or Haskell code when you are done), 
I feel that in practice they are only useful for modeling fairly technical 
things like protocols and crypto stuff -- whereas if this existed within 
Haskell proper it would find much broader use and have greater impact.

I haven't fully fleshed out all the various steps of what exactly it would mean 
to have program derivation be a first class citizen, but I'll pause here and 
followup if a conversation ensues. 

To me, it seems that something like this should be possible -- am i being 
naive? does it already exist?  have people tried and given up? is it just 
around the corner?  can you help me make sense of all of this?

thanks! nehal


Re: [Haskell-cafe] Why isn't Program Derivation a first class citizen?

2013-02-12 Thread Jan Stolarek
 To me, it seems that something like this should be possible -- am i being
 naive? does it already exist?  
During the compilation process GHC optimizes the code by performing successive 
transformations of the program. These transformations are known to preserve the 
meaning of the program - they are based on already proved facts and properties. 
I've written a blog post on this recently:

http://lambda.jstolarek.com/2013/01/taking-magic-out-of-ghc-or-tracing-compilation-by-transformation/

Now, you might be asking why GHC transforms the Core representation of a 
program and not the original Haskell code itself. The answer is simplicity: 
Core is equivalent to Haskell, but it's much easier to work with, since there 
are only a few language constructs, and so it's easier to design 
transformations and prove them correct.
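You can watch these passes at work yourself. Compile a small module like the one below with the (real) GHC flags `-O2 -ddump-simpl -dsuppress-all`, and the dumped Core should show the `map` and `sum` traversals fused into a single accumulating loop, with no intermediate list:

```haskell
-- Compile with:  ghc -O2 -ddump-simpl -dsuppress-all SumSquares.hs
-- (module/file name is just an example). At -O2, foldr/build fusion
-- should collapse the pipeline into one loop over [1 .. n].
sumSquares :: Int -> Int
sumSquares n = sum (map (\x -> x * x) [1 .. n])

main :: IO ()
main = print (sumSquares 10)  -- 385
```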

I hope this at least partially answers your question. Whether there are 
automated ways of transforming Haskell programs into more efficient Haskell 
programs, I don't know.

Janek
