Re: [Haskell-cafe] Question: Lazy Incremental Evaluation and Haskell?

2011-10-10 Thread Benjamin Redelings

On 10/08/2011 12:46 PM, Jan-Willem Maessen wrote:

On Fri, Oct 7, 2011 at 2:46 PM, Brandon Moore brandon_m_mo...@yahoo.com  wrote:

Magnus Carlsson did something monadic several years ago.

http://dl.acm.org/citation.cfm?doid=581478.581482

Perhaps there is an implementation on Hackage or on his website.

This stuff also goes by the moniker "adaptive computation". See the
references and citations of that paper for more on this.

Umut Acar now seems to refer to this as self-adjusting computation,
and has some work here:

http://umut.mpi-sws.org/self-adjusting-computation

In particular, there seems to be a modified version of MLton.

To tie things together a bit, Magnus Carlsson's paper was based on
Umut Acar's earlier work.  Note in particular that there's a lot of
emphasis placed on efficiently figuring out what computation needs to
be re-done (and some theory to support those efficiency claims).  FRP
frameworks, etc. naively re-do rather too much computation (all of it,
in particularly poor cases) compared to systems specifically tailored
to self-adjustment.

-Jan-Willem Maessen


1. Thank you!  This is the kind of thing I was looking for.  It
(a) uses compiler/interpreter infrastructure (not explicit programmer 
directives) to
(b) construct a dynamic dependency graph that reflects the structure of 
the computation.
I am curious whether anyone has constructed a modified STG machine (which 
doesn't modify running code, I guess?) or, alternatively, a graph-based 
machine like Sestoft's Mark 1 machine (which actually does modify running 
code) that would track dependencies.  That would be call-by-need instead 
of the call-by-value setting of the existing work.


(Just for clarification: I am interested in calculations where the 
individual operations (e.g. analogues of '+') are extremely slow and are 
currently written in C++.  Therefore, a certain amount of overhead seems 
tolerable.)
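
To make question 1 concrete, here is a minimal sketch of the kind of dynamic
dependency graph I have in mind, written with plain IORefs.  It is not
Carlsson's Adaptive interface or any existing library; all the names (Cell,
newInput, newDerived, setCell) are invented for illustration, and it
propagates changes naively (a node reachable along several paths would be
recomputed more than once):

{-# LANGUAGE ExistentialQuantification #-}
import Control.Applicative ((<$>), (<*>))
import Data.IORef

-- A cell caches its value, knows how to recompute it, and records which
-- cells depend on it (the outgoing edges of the dependency graph).
data Cell a = Cell
  { value   :: IORef a
  , compute :: IO a              -- how to (re)compute the cached value
  , deps    :: IORef [SomeCell]  -- cells to update when this one changes
  }

data SomeCell = forall a. SomeCell (Cell a)

-- An input cell: its value is only ever set directly.
newInput :: a -> IO (Cell a)
newInput x = do
  v <- newIORef x
  d <- newIORef []
  return (Cell v (readIORef v) d)

-- A derived cell: computed by 'f' from the given inputs, cached until
-- one of those inputs changes.
newDerived :: [SomeCell] -> IO a -> IO (Cell a)
newDerived inputs f = do
  v <- f >>= newIORef
  d <- newIORef []
  let c = Cell v f d
  mapM_ (\(SomeCell i) -> modifyIORef (deps i) (SomeCell c :)) inputs
  return c

readCell :: Cell a -> IO a
readCell = readIORef . value

-- Change an input and push the change downstream, recomputing only the
-- cells that (transitively) depend on it.
setCell :: Cell a -> a -> IO ()
setCell c x = writeIORef (value c) x >> propagate c

propagate :: Cell a -> IO ()
propagate c = readIORef (deps c) >>= mapM_ update
  where
    update (SomeCell d) = do
      x <- compute d
      writeIORef (value d) x
      propagate d

-- The (x+y)+z example: changing z recomputes only the top node, reusing
-- the cached value of x+y.
main :: IO ()
main = do
  x  <- newInput (1 :: Double)
  y  <- newInput 2
  z  <- newInput 4
  xy <- newDerived [SomeCell x, SomeCell y] ((+) <$> readCell x <*> readCell y)
  s  <- newDerived [SomeCell xy, SomeCell z] ((+) <$> readCell xy <*> readCell z)
  readCell s >>= print    -- 7.0
  setCell z 10            -- only 's' is recomputed
  readCell s >>= print    -- 13.0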


2. Another question: most of the self-adjusting computation 
frameworks seem to assume that we always throw away the old 
computation.  For function optimization (or, alternatively, Markov chain 
Monte Carlo random walks) we keep either the old or the new computation, 
depending on the result of the new one.  Thus, we would not like to 
simply overwrite the old computation when we perform the new one.  
This could mean splitting (part of) the dynamic dependency graph, which 
might incur a lot of memory allocation overhead...
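
To pin down what I mean, here is a toy, purely functional version of the 
(x+y)+z example in which the old and the proposed computation coexist by 
sharing; keeping both costs only the nodes on the path from the changed 
leaf to the root.  The types are invented, purely for illustration:

-- Toy cached expression: each interior node stores the value of its
-- subtree, so unchanged subtrees are never revisited.
data Expr = Leaf Double
          | Add Double Expr Expr   -- cached sum of the two children

val :: Expr -> Double
val (Leaf x)    = x
val (Add v _ _) = v

add :: Expr -> Expr -> Expr
add a b = Add (val a + val b) a b

-- An MCMC proposal that changes z: the new tree shares the (x+y) node
-- with the old one, so both versions stay valid and we can keep either.
propose :: Expr -> Double -> Expr
propose (Add _ xy _oldZ) newZ = add xy (Leaf newZ)
propose e                _    = e    -- not the shape we expect; leave it

example :: (Double, Double)
example =
  let old      = add (add (Leaf 1) (Leaf 2)) (Leaf 4)  -- caches 3 and 7
      proposed = propose old 10                        -- reuses the cached 3
  in (val old, val proposed)                           -- (7.0, 13.0)

Whether allocating that path once per proposal is acceptable presumably 
depends on how expensive the real operations are.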


-BenRI



Re: [Haskell-cafe] Question: Lazy Incremental Evaluation and Haskell?

2011-10-08 Thread Heinrich Apfelmus

David Barbour wrote:

Heinrich Apfelmus wrote:

Even then, events and behaviors are one abstraction level too low. In my
opinion, you are better off with a library/approach geared directly towards
incremental computations.


I believe behaviors are precisely the 'right' abstraction if the goal is to
model variables in a computation changing over time. But I agree that, if
the libraries aren't taking advantage of the implicit optimization
opportunities, you are better off finding libraries that do.


I agree that behaviors are the right abstraction if you only want to 
know what result is being calculated, not how it is going to be calculated.


Unfortunately, the "how" is important for incremental computations 
because there is no single canonical way to implement efficient updates. 
It often depends on which functions are expensive, which computations can 
actually be decoupled, and so on. I don't know of any good abstraction for 
specifying the "how"; I do know that FRP doesn't help much with that.
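
A tiny, made-up illustration of that point (it has nothing to do with any 
particular FRP library): replacing a single element admits completely 
different update strategies depending on which function is being 
maintained, even though the result type is the same.

import Data.List (delete)

-- Replacing one element 'old' by 'new' in a collection:

-- a cached sum can absorb the change in O(1) ...
updateSum :: Double -> Double -> Double -> Double
updateSum cachedSum old new = cachedSum - old + new

-- ... while a cached maximum sometimes cannot: if the replaced element
-- was the maximum itself, the only safe thing is a full rescan.
updateMax :: [Double] -> Double -> Double -> Double -> Double
updateMax xs cachedMax old new
  | new >= cachedMax = new                            -- O(1)
  | old <  cachedMax = cachedMax                      -- O(1)
  | otherwise        = maximum (new : delete old xs)  -- O(n) rescan

The result type alone does not tell you which situation you are in; that 
is the "how" that an abstraction would somehow have to capture.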



Best regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com




Re: [Haskell-cafe] Question: Lazy Incremental Evaluation and Haskell?

2011-10-08 Thread Jan-Willem Maessen
On Fri, Oct 7, 2011 at 2:46 PM, Brandon Moore brandon_m_mo...@yahoo.com wrote:
 Magnus Carlsson did something monadic several years ago.

 http://dl.acm.org/citation.cfm?doid=581478.581482

 Perhaps there is an implementation on Hackage or on his website.

 This stuff also goes by the moniker "adaptive computation". See the
 references and citations of that paper for more on this.

 Umut Acar now seems to refer to this as self-adjusting computation,
 and has some work here:

 http://umut.mpi-sws.org/self-adjusting-computation

 In particular, there seems to be a modified version of MLton.

To tie things together a bit, Magnus Carlsson's paper was based on
Umut Acar's earlier work.  Note in particular that there's a lot of
emphasis placed on efficiently figuring out what computation needs to
be re-done (and some theory to support those efficiency claims).  FRP
frameworks, etc. naively re-do rather too much computation (all of it,
in particularly poor cases) compared to systems specifically tailored
to self-adjustment.

-Jan-Willem Maessen



Re: [Haskell-cafe] Question: Lazy Incremental Evaluation and Haskell?

2011-10-07 Thread David Barbour
Functional Reactive Programming can model this sort of 'change over time'
incremental computation, but I doubt you'd get a performance benefit from it
unless your operations are considerably more expensive than '+' on numbers.
Look into the 'Reactive' library, and Conal Elliott's paper on it (Push-Pull
FRP).

On Thu, Oct 6, 2011 at 2:55 PM, Benjamin Redelings I 
benjamin.redeli...@duke.edu wrote:

 Hi all,

I'm not sure this is the right forum for this question.  If so,
 please let me know where else might be more appropriate.  My question
 is, roughly, is there already an existing framework for incremental
 evaluation in Haskell?  That is, if I write a program using Haskell, and
 then change a small part of this program, can the modified program
 re-use any results which are calculated by the first, unmodified
 program?  This would be really useful for CPU-intensive statistical
 software and other scientific computing.

For example, if I have the expression (x+y)+z, where x=1, y=2, and
 z=4, then I need not recalculate (x+y) when z changes.  However, (x+y)+z
 must still be recalculated.  This is useful in speeding up statistical
 software that optimizes a complex function of many arguments, when only
 one argument changes at a time.  It is also useful for Markov chain
 Monte Carlo (MCMC) since usually one changes only one argument at a time
 there as well.

 I haven't seen much work on this using the lambda calculus since
 J.H. Field's Ph.D. thesis "Incremental Reduction in the Lambda
 Calculus".  There are a number of programs that represent the
 calculation as a static DAG (directed acyclic graph), but this can't
 handle functions very well.  I'm currently trying to write an
 interpreter that could do this correctly, but I don't want to reinvent
 the wheel.  Can anyone point me to where I could find more information
 about how to do this in a Haskell framework?

 thanks!
 -BenRI
 --
 Benjamin Redelings




Re: [Haskell-cafe] Question: Lazy Incremental Evaluation and Haskell?

2011-10-07 Thread Sean Leather
Hi Benjamin,

My question is, roughly, is there already an existing framework for
 incremental evaluation in Haskell?


We at Utrecht have done some work on this:
http://people.cs.uu.nl/andres/Incrementalization/

Simply put, if your computation is a fold/catamorphism, then you can easily
take advantage of a generic framework for incremental evaluation.
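
For a rough idea of what that means concretely, here is a hand-rolled sketch 
for one fixed datatype; it is not the generic framework from the page above, 
and the names are invented for illustration.  The idea is to cache the fold 
result in every node, so that an update only recomputes the caches on the 
path from the change to the root:

-- Binary trees of Ints, folded with (+); every Node caches the fold of
-- its subtree.
data Tree = Leaf Int
          | Node Int Tree Tree            -- the Int is the cached sum

cata :: Tree -> Int
cata (Leaf n)     = n
cata (Node s _ _) = s                     -- already cached, O(1)

node :: Tree -> Tree -> Tree              -- smart constructor maintaining the cache
node l r = Node (cata l + cata r) l r

-- Replace the subtree at the given path (False = left, True = right),
-- recomputing only the caches along that path.
replaceAt :: [Bool] -> Tree -> Tree -> Tree
replaceAt []     new _            = new
replaceAt (b:bs) new (Node _ l r)
  | b             = node l (replaceAt bs new r)
  | otherwise     = node (replaceAt bs new l) r
replaceAt _      _   leaf         = leaf  -- path runs past a leaf: no change

example :: (Int, Int)
example =
  let t  = node (node (Leaf 1) (Leaf 2)) (Leaf 4)   -- caches 3 and 7
      t' = replaceAt [True] (Leaf 10) t             -- reuses the cached 3
  in (cata t, cata t')                              -- (7, 13)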

We haven't actually turned this into a library, but if you're interested, we
can discuss doing that.

Regards,
Sean


Re: [Haskell-cafe] Question: Lazy Incremental Evaluation and Haskell?

2011-10-07 Thread Heinrich Apfelmus

David Barbour wrote:

Benjamin Redelings wrote:

My question is, roughly, is there already an existing framework for
incremental evaluation in Haskell?


Functional Reactive Programming can model this sort of 'change over
time' incremental computation, but I doubt you'd get a performance
benefit from it unless your operations are considerably more
expensive than '+' on numbers. Look into the 'Reactive' library, and
Conal Elliott's paper on it (Push-Pull FRP).


I'm currently developing a library for functional reactive 
programming [1] and I've thought a bit about incremental evaluation and 
FRP. Here are my conclusions:


FRP is somewhat orthogonal to incremental computation: FRP focuses 
on expressiveness while incremental computation focuses on 
performance. You can formulate some incremental algorithms in terms of 
FRP, but you need special support from the implementation to make them 
efficient. I do know that reactive-banana can handle some cases, but I 
have my doubts whether Conal's Reactive library can (afaik, nobody has 
tried to reason about its resource usage explicitly). Even then, events 
and behaviors are one abstraction level too low. In my opinion, you 
are better off with a library/approach geared directly towards 
incremental computations.



  [1]: http://haskell.org/haskellwiki/Reactive-banana

Best regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com




Re: [Haskell-cafe] Question: Lazy Incremental Evaluation and Haskell?

2011-10-07 Thread David Barbour
On Fri, Oct 7, 2011 at 3:17 AM, Heinrich Apfelmus apfel...@quantentunnel.de
 wrote:

 FRP is somewhat orthogonal to incremental computation because FRP is
 focusing on expressiveness while incremental computation focuses on
 performance. You can formulate some incremental algorithms in terms of FRP,
 but you need special support from the implementation to make it efficient.


Granted. The 'Reactive' library provides some special support for composing
constant behavior values, but I share your doubts about its suitability for
high-performance incremental computation. From my reading of the Push-Pull FRP
paper, it should scale poorly because of the linear 'checks' of promises needed
to find which variables have changed. If you have 1000 variables you
still might need to probe up to 1000 promises (or more, if you use variables
more than once) to see their next time-of-change.

My asynchronous arrows and reactive demand programming model is well suited
for incremental computation, having been designed for scalability and
distributed computation (breaking it into multiple pipelines with minimal
synchronization interactions at zip and merge). But it currently lacks a
complete implementation.



 Even then, events and behaviors are one abstraction level too low. In my
 opinion, you are better off with a library/approach geared directly towards
 incremental computations.


I believe behaviors are precisely the 'right' abstraction if the goal is to
model variables in a computation changing over time. But I agree that, if
the libraries aren't taking advantage of the implicit optimization
opportunities, you are better off finding libraries that do.

Regards,

Dave


Re: [Haskell-cafe] Question: Lazy Incremental Evaluation and Haskell?

2011-10-07 Thread Brandon Moore
 From: Peter Gammie pete...@gmail.com, Oct 6, 2011 6:58 PM

 
 Ben,
 
 On 07/10/2011, at 8:55 AM, Benjamin Redelings I wrote:
 
  My question is, roughly, is there already an existing framework for 
 incremental
  evaluation in Haskell?
 
 Magnus Carlsson did something monadic several years ago.
 
 http://dl.acm.org/citation.cfm?doid=581478.581482
 
 Perhaps there is an implementation on Hackage or on his website.
 
 This stuff also goes by the moniker "adaptive computation". See the 
 references and citations of that paper for more on this.

Umut Acar now seems to refer to this as self-adjusting computation,
and has some work here:

http://umut.mpi-sws.org/self-adjusting-computation

In particular, there seems to be a modified version of MLton.

Brandon



[Haskell-cafe] Question: Lazy Incremental Evaluation and Haskell?

2011-10-06 Thread Benjamin Redelings I

Hi all,

I'm not sure this is the right forum for this question.  If so,
please let me know where else might be more appropriate.  My question
is, roughly, is there already an existing framework for incremental
evaluation in Haskell?  That is, if I write a program using Haskell, and
then change a small part of this program, can the modified program
re-use any results which are calculated by the first, unmodified
program?  This would be really useful for CPU-intensive statistical
software and other scientific computing.

For example, if I have the expression (x+y)+z, where x=1, y=2, and
z=4, then I need not recalculate (x+y) when z changes.  However, (x+y)+z
must still be recalculated.  This is useful in speeding up statistical
software that optimizes a complex function of many arguments, when only
one argument changes at a time.  It is also useful for Markov chain
Monte Carlo (MCMC) since usually one changes only one argument at a time
there as well.
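
(When it is known in advance which argument will vary, ordinary call-by-need 
sharing already gives a limited form of this by hand: float the stable 
sub-computation out of the function of the changing argument.  The following 
sketch covers only that special case; it does not generalize to arbitrary 
changes, which is exactly why I am asking about a framework.)

-- The (x+y)+z example, incrementalised by hand for the special case where
-- we know that only z will change: float x+y out of the function of z.
-- Under call-by-need the shared thunk 'xy' is evaluated at most once.
sumWithZ :: Double -> Double -> (Double -> Double)
sumWithZ x y = let xy = x + y    -- shared by every application of the result
               in \z -> xy + z

example :: (Double, Double)
example =
  let f = sumWithZ 1 2           -- x+y is evaluated once, on first demand
  in (f 4, f 10)                 -- (7.0, 13.0); the second call reuses x+y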

I haven't seen much work on this using the lambda calculus since
J.H. Field's Ph.D. thesis "Incremental Reduction in the Lambda
Calculus".  There are a number of programs that represent the
calculation as a static DAG (directed acyclic graph), but this can't
handle functions very well.  I'm currently trying to write an
interpreter that could do this correctly, but I don't want to reinvent
the wheel.  Can anyone point me to where I could find more information
about how to do this in a Haskell framework?

thanks!
-BenRI
--
Benjamin Redelings




Re: [Haskell-cafe] Question: Lazy Incremental Evaluation and Haskell?

2011-10-06 Thread Peter Gammie
Ben,

On 07/10/2011, at 8:55 AM, Benjamin Redelings I wrote:

 My question is, roughly, is there already an existing framework for 
 incremental
 evaluation in Haskell?

Magnus Carlsson did something monadic several years ago.

http://dl.acm.org/citation.cfm?doid=581478.581482

Perhaps there is an implementation on Hackage or on his website.

This stuff also goes by the moniker "adaptive computation". See the references 
and citations of that paper for more on this.

cheers
peter

-- 
http://peteg.org/

