Re: ideas for compiler project
On Sat, 16 Feb 2002, Dylan Thurston wrote:

> On Thu, Jan 24, 2002 at 03:38:59PM +0100, Bjorn Lisper wrote:
> > I think MATLAB's matrix language provides about the right level of
> > abstraction for a high-level matrix language. You can for instance
> > write things like
> >
> >   Y = inv(A)*B
> >
> > to assign to Y the solution of Ax = B. ...
>
> Just a comment on a long post... I am personally fond of MetaFont's
> approach, where you write
>
>   Ax = B
>
> to find the solution to Ax = B. When working with transformations and
> such, being able to write all your equations "forwards" makes it much
> easier to keep everything straight; plus, if you have several equations
> for a variable, you don't have to figure out how to gather them
> together. Can anyone see a way to implement something like this in
> Haskell? Or is it better to make a small interpreted language?
>
> Best,
> Dylan Thurston

Why not write some software that does something like

  let y = ((Matrix A) :*: (Vector X)) := (Matrix B)

  data MatrixExp = ...
  data Sym = A | B | C ...
  data Unknown = X | Y ...

  solve :: MatrixExp -> Maybe (Vector Sym)

?

Jay Cox
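One way to flesh that sketch out: keep the equation symbolic and isolate
the unknown by peeling known factors off with an explicit inverse. This
is only a minimal sketch; every name in it (MatExp, Equation, solveFor,
...) is invented for illustration, and it handles only a single unknown
surrounded by products:

  infixl 7 :*:
  infix  1 :=

  data MatExp
    = Known String            -- a named, known matrix
    | Unknown String          -- the matrix being solved for
    | MatExp :*: MatExp       -- matrix product
    | Inv MatExp              -- formal inverse
    deriving Show

  data Equation = MatExp := MatExp deriving Show

  -- Rewrite  A :*: X := B  into  X = Inv A :*: B  (and symmetrically
  -- for known factors on the right); give up on anything fancier.
  solveFor :: String -> Equation -> Maybe MatExp
  solveFor x (lhs := rhs) = go lhs rhs
    where
      go (Unknown y) r | y == x = Just r
      go (a :*: b) r
        | not (contains x a) = go b (Inv a :*: r)  -- cancel a on the left
        | not (contains x b) = go a (r :*: Inv b)  -- cancel b on the right
      go _ _ = Nothing

  contains :: String -> MatExp -> Bool
  contains x (Unknown y) = x == y
  contains _ (Known _)   = False
  contains x (Inv e)     = contains x e
  contains x (a :*: b)   = contains x a || contains x b

With this, solveFor "X" (Known "A" :*: Unknown "X" := Known "B") yields
Just (Inv (Known "A") :*: Known "B") - exactly the "Y = inv(A)*B"
reading of Ax = B.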
Re: ideas for compiler project
On Thu, Jan 24, 2002 at 03:38:59PM +0100, Bjorn Lisper wrote:
> I think MATLAB's matrix language provides about the right level of
> abstraction for a high-level matrix language. You can for instance
> write things like
>
>   Y = inv(A)*B
>
> to assign to Y the solution of Ax = B. ...

Just a comment on a long post... I am personally fond of MetaFont's
approach, where you write

  Ax = B

to find the solution to Ax = B. When working with transformations and
such, being able to write all your equations "forwards" makes it much
easier to keep everything straight; plus, if you have several equations
for a variable, you don't have to figure out how to gather them
together. Can anyone see a way to implement something like this in
Haskell? Or is it better to make a small interpreted language?

Best,
	Dylan Thurston
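For the scalar case, something close to MetaFont's behaviour can be had
by representing each side of an equation as a linear form (a constant
plus coefficients over named unknowns) and reducing it against the
variables solved so far. What follows is only a rough sketch under a
strong simplifying assumption - each equation must reduce to at most one
remaining unknown, whereas MetaFont also keeps dependencies between
still-unknown variables - and all the names are invented:

  import qualified Data.Map as M

  -- a linear form: c + sum of coeff * var
  type Lin = (Double, M.Map String Double)
  type Env = M.Map String Double   -- variables solved so far

  var :: String -> Lin
  var x = (0, M.singleton x 1)

  konst :: Double -> Lin
  konst c = (c, M.empty)

  add :: Lin -> Lin -> Lin
  add (c1, m1) (c2, m2) = (c1 + c2, M.unionWith (+) m1 m2)

  scale :: Double -> Lin -> Lin
  scale k (c, m) = (k * c, M.map (k *) m)

  -- substitute already-solved variables into a linear form
  reduceLin :: Env -> Lin -> Lin
  reduceLin env (c, m) = M.foldrWithKey step (c, M.empty) m
    where
      step x k (c', m') = case M.lookup x env of
        Just v  -> (c' + k * v, m')
        Nothing -> (c', M.insert x k m')

  -- state the equation lhs = rhs "forwards" and learn what it implies
  equate :: Lin -> Lin -> Env -> Either String Env
  equate lhs rhs env =
    case reduceLin env (add lhs (scale (-1) rhs)) of
      (c, m)
        | M.null m      -> if c == 0 then Right env
                           else Left "inconsistent equation"
        | M.size m == 1 -> let (x, k) = M.findMin m
                           in Right (M.insert x (negate c / k) env)
        | otherwise     -> Left "sketch handles one unknown at a time"

For example, equate (scale 2 (var "x")) (konst 6) M.empty >>=
equate (add (var "x") (var "y")) (konst 5) gives
Right (fromList [("x",3.0),("y",2.0)]).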
Re: ideas for compiler project
Jerzy Karczmarczuk writes:
> Steven Bevan wrote interesting numeric routines a long time ago.

Actually I did little more than transliterate the algorithm descriptions
I found in a book on numerical analysis. I know next to nothing about
numerical analysis, so I have no idea if they are interesting or useful.
Implementing them was a pleasant diversion from what I was supposed to
be doing at the time :-)
Re: ideas for compiler project
On Friday 25 January 2002 13:25, Jerzy Karczmarczuk wrote:
> Block recursive schemes in Matlab are easier than in C++. Implementing
> pyramid algorithms is not difficult. Slicing, reshaping, cloning, etc.
> of matrices are very powerful tools, but they are so imperative, that
> it is not easy to see how to replace them with something "functionally
> purified".

I think there is a very simple answer to all this. I'd considered the
same thing: making Haskell a front end for numerical/optimization/etc.
codes. I now recall papers about compiling dense/sparse matrix codes in
an architecture-independent way. To summarize: it's better if the system
knows about algebra.

However, I don't think it's feasible to write a Haskell library that
does this, or to extend Haskell so that it becomes "linear algebra"
aware. I suppose the right direction is to write a compiler/interpreter
for a linear algebra/numerical language in Haskell! That language can be
made very mathematical, and still much more capable and efficient than
MATLAB. Otherwise all you're going to have is another MATLAB clone. The
hard part here is of course the design of this specific language...

Nevertheless, writing a MATLAB clone in Haskell would be fun as well!
It could surely be made more extensible and reliable than MATLAB itself.

Thanks,

--
Eray Ozkural (exa) <[EMAIL PROTECTED]>
Comp. Sci. Dept., Bilkent University, Ankara
www: http://www.cs.bilkent.edu.tr/~erayo
GPG public key fingerprint: 360C 852F 88B0 A745 F31B EA0F 7C07 AE16 874D 539C
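For flavour, the core of such an interpreter is small. Here is a minimal
sketch over lists of lists, with no dimension checking and with all
names (Exp, eval, ...) invented for the example:

  import Data.List (transpose)

  type Matrix = [[Double]]
  type Env    = [(String, Matrix)]

  data Exp
    = Lit Matrix        -- literal matrix
    | Var String        -- named matrix, looked up in the environment
    | Add Exp Exp
    | Mul Exp Exp
    | Trans Exp
    deriving Show

  eval :: Env -> Exp -> Either String Matrix
  eval _   (Lit m)   = Right m
  eval env (Var x)   = maybe (Left ("unbound: " ++ x)) Right (lookup x env)
  eval env (Add a b) = zipWith (zipWith (+)) <$> eval env a <*> eval env b
  eval env (Mul a b) = mul <$> eval env a <*> eval env b
    where mul m n = [[sum (zipWith (*) r c) | c <- transpose n] | r <- m]
  eval env (Trans a) = transpose <$> eval env a

Everything else - parsing, shape checking, and above all an optimizer
that "knows about algebra" - is where the real work (and the interest)
lies.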
Re: ideas for compiler project
Simon Peyton-Jones:

> Lots of people have observed that Haskell might be a good "scripting
> language" for numerical computation. In complicated numerical
> applications, the program may spend most of its time in (say) matrix
> multiply, which constitutes a tiny fraction of the code for the
> application. So write the bulk of the application in Haskell (where the
> logic is complex but the performance doesn't matter) and then link to a
> C or Fortran library to do the small part that is really compute
> intensive. ...
> You'd need to find a "real" application though. The classical "kernels"
> (matrix multiply, inversion etc) are precisely the things you may not
> want to do in Haskell.

That's it. With one "grain of salt". It happened to me that I wanted to
write some structurally trivial routines for matrix inversion, iterators
for ODEs, etc., but I wanted them *polymorphic*. (And, sometimes, lazy:
manipulators of power series, asymptotic expansions, some automatic
differentiation stuff, etc.) Then Haskell was a decent tool, and - as
Björn Lisper remarks - the hindrance was the lack of integrated tools.
Writing a triple loop to invert a matrix, instead of having something
like `recip` defined within the field of square matrices and implemented
with some efficiency considerations in mind, is a bit clumsy.

Björn quotes and comments:

> > The classical "kernels" [...] you may not want to do in Haskell.
>
> With the current compiler technology for Haskell, one would add. I
> don't think it would be impossible to compile such Haskell programs
> into efficient code. Functional languages for matrix/array computing
> was a quite active research area 10-15 years ago, with efforts like
> Sisal and Id. These languages were strict and first order, but you can
> write such programs in Haskell. I think it would be possible to have a
> Haskell compiler that could manage a subset of Haskell matrix programs
> quite well.

Steven Bevan wrote interesting numeric routines a long time ago.
Thorsten Zoerner wrote a Clean package, Class, with several linear
algebra and some "slicing" utilities, which emulated the "vectorized"
approach of Matlab. This can for the greater part be translated into
Haskell.

But, please: some criticisms of Matlab are weakly justified. BL says:

> Also, MATLAB is very ill-suited to
> expressing block-recursive matrix algorithms, which are becoming
> increasingly important in numerical computing. And, of course, there is
> no decent type system, no higher order functions, etc...

Block-recursive schemes in Matlab are easier than in C++. Implementing
pyramid algorithms is not difficult. Slicing, reshaping, cloning, etc.
of matrices are very powerful tools, but they are so imperative that it
is not easy to see how to replace them with something "functionally
purified". The Matlab type system is dynamic and "indecent", but you
have objects and inheritance, and you *HAVE* higher-order functions as
well. All the Matlab GUI tools, very powerful and reconfigurable, are
based on objects which accept callbacks as parameters. This is not as
clean as we would like, but pragmatically it is OK.

What bothers me a bit in our Haskell world is the fact that the efforts
are atomized. People work on GUIs, and don't care about drawing/painting
routines. Those who care are often far away from the numerical world.
The numerically oriented folk often disregard, with a good deal of
nonchalance, the attempts to put the Haskell numerical classes in an
abstract algebraic framework (really useful from the point of view of
code reuse), etc. I have the impression that this is changing, but
slowly, while the scientific computation/visualization world is marching
very fast, not only in the direction of very-fast-even-more-dirty
routines, but also in the direction of new
conceptualization/representation models (like, e.g., "actors" in VTK).

Jerzy Karczmarczuk
Caen, France
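As a taste of the lazy side mentioned above, power series manipulation
really is pleasant in Haskell. A minimal sketch, representing a series
by its (infinite) list of coefficients:

  type Series = [Double]

  sadd :: Series -> Series -> Series
  sadd = zipWith (+)

  -- Cauchy product: (f0 + x*F) * g = f0*g0 + x*(f0*G + F*g)
  smul :: Series -> Series -> Series
  smul (f0:fs) g@(g0:gs) = f0 * g0 : sadd (map (f0 *) gs) (smul fs g)
  smul _ _ = []

  -- integration with a given constant term
  sint :: Double -> Series -> Series
  sint c fs = c : zipWith (/) fs [1 ..]

  -- series defined directly by the ODEs they satisfy
  expS, sinS, cosS :: Series
  expS = sint 1 expS                  -- exp' = exp,  exp 0 = 1
  sinS = sint 0 cosS                  -- sin' = cos,  sin 0 = 0
  cosS = sint 1 (map negate sinS)     -- cos' = -sin, cos 0 = 1

Then take 5 expS gives [1.0, 1.0, 0.5, 0.1666..., 0.0416...], the Taylor
coefficients of e^x, and the definitions read exactly like the
mathematics.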
RE: ideas for compiler project
One thing I would very much like to see done in a functional language is
fault-tree analysis. A fault tree has as nodes various undesirable
events, with as top node some disaster (for example, nuclear reactor
meltdown) and as leaves various faults which can occur, with their
probabilities (valve fails, operator doesn't notice red flashing light,
and so on). Each internal node has a logical operator (often just AND or
OR) associated with it; the fault associated with that node cannot
happen unless the operator applied to the children nodes holds, e.g.
"the water pressure won't get too high unless the valve fails AND the
operator doesn't notice the red flashing light". Thus you can (assuming
all the faults at the leaves are independent) bound the probability of
the top node happening by bounding the probability that a particularly
horrendous logical expression in a number of independent binary
variables is true.

Unfortunately, working out the probability is extremely hard to do (it's
#P-complete), and simulation (running it lots of times) is not accurate,
since the top probability is (hopefully) extremely low and so hard to
estimate accurately without an awful lot of tests. So the commercial
software for this (extremely important) problem has to adopt a very
large number of heuristic approaches: for example, trying to split the
problem into smaller parts which only have a few nodes in common, trying
to come up with a set of "cut sets" of faults, such that the top event
cannot occur unless at least one of the cut sets has all faults
occurring, and so on. There are lots of potential logical reorderings
you can attempt on the tree to try to simplify things.

The challenge is to come up with a reasonable way of finding a good
upper bound for the top probability, while keeping your code
sufficiently simple that those of us who live near nuclear reactors can
trust it. I think my approach would be to regard this as a graph
transformation problem, where the aim is to transform the input fault
tree into one of a simpler form (where it is not too hard to bound the
top probability) by a number of well-defined operations guaranteed not
to decrease the top probability. The heuristic part of the program (the
largest part) would use a number of ad hoc methods (such as simulations
and attempting to compute partial derivatives in terms of node
probabilities) to search for the reduction which increases the top
probability the least; it would hand over the result of its search to a
verification part which would compute the actual bound, so that only the
verification part would have to be guaranteed bug-free.

Has anyone done anything like this already, by the way? There ought to
be money in it . . .
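For concreteness, the naive evaluation that all of the above tries to
improve on fits in a few lines of Haskell. A sketch, assuming
independent leaf faults; prob is exact only when no fault appears in two
subtrees, while upper is a crude bound that stays valid even when
subtrees share faults:

  data FaultTree
    = Fault Double        -- basic fault with its probability
    | And [FaultTree]     -- fault occurs only if all children occur
    | Or  [FaultTree]     -- fault occurs if any child occurs

  prob :: FaultTree -> Double
  prob (Fault p) = p
  prob (And ts)  = product (map prob ts)
  prob (Or ts)   = 1 - product (map ((1 -) . prob) ts)

  -- P(A and B) <= min(P A, P B); P(A or B) <= P A + P B (union bound)
  upper :: FaultTree -> Double
  upper (Fault p) = p
  upper (And ts)  = minimum (map upper ts)
  upper (Or ts)   = min 1 (sum (map upper ts))

  -- the example above: overpressure needs the valve to fail AND the
  -- operator to miss the red flashing light
  overpressure :: FaultTree
  overpressure = And [Fault 0.001, Fault 0.05]   -- prob = 5.0e-5

The hard, interesting part is everything this leaves out: the
tree-transformation search and the verified bound computation.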
Re: ideas for compiler project
Simon:

> Lots of people have observed that Haskell might be a good "scripting
> language" for numerical computation. In complicated numerical
> applications, the program may spend most of its time in (say) matrix
> multiply, which constitutes a tiny fraction of the code for the
> application. So write the bulk of the application in Haskell (where the
> logic is complex but the performance doesn't matter) and then link to a
> C or Fortran library to do the small part that is really compute
> intensive.
>
> More concretely, suppose you built an FFI interface to BLAS or some
> readily available numerical library, and then demonstrated the utility
> of the resulting beast by writing some non-trivial numerical
> application in it.

This is an interesting idea that I think can be made to work to some
extent. One issue will be to find the right tradeoff between stateful
and purely functional, i.e., allowing nice matrix-algebra based
expressions while ensuring that memory can be reused to the extent
necessary. If all matrix operations are wrapped up in some state monad,
then you lose the nice matrix-algebra style. If, on the other hand, you
have no way to express updating, then you lose memory reuse (unless you
have a *really* smart compiler...). The best tradeoff is probably to
mimic a small imperative language with clean, side effect-free matrix
operations, where updates are effectuated by explicit matrix
assignments. Of course, this is possible only if the BLAS routines are
side effect-free themselves or can be wrapped up as such.

It is interesting to note that MATLAB started out exactly as a
convenient scripting language for a Fortran matrix library. MATLAB is a
system for matrix computing that contains nice graphical possibilities
and a matrix programming language. It is very popular in engineering,
and is widely used for applications like signal processing, automatic
control, and PDE solving, in particular in the early prototyping phase.
Its popularity does not stem from its speed, since it's quite slow, but
rather from:

- nice graphics capabilities (plotting, etc.),
- a convenient matrix language (to some extent, I would add), and
- above all, lots of "toolboxes", i.e., MATLAB software wrapped up for
  various applications (simulation of control systems/signal processing,
  etc.). There is a big third-party market for this kind of software.

I think MATLAB's matrix language provides about the right level of
abstraction for a high-level matrix language. You can for instance write
things like

  Y = inv(A)*B

to assign to Y the solution of Ax = B. To some extent this is what you'd
like to have. On the other hand, a closer look at the language will
reveal that it has many strange and ugly semantical corners, where the
imperativeness shines through in nasty ways. This is a hindrance to
optimizing compilation, since good optimization of matrix code requires
extensive reordering transformations. Also, MATLAB is very ill-suited to
expressing block-recursive matrix algorithms, which are becoming
increasingly important in numerical computing. And, of course, there is
no decent type system, no higher-order functions, etc...

So, I think it could be interesting to embed a purified version of
MATLAB's matrix language in Haskell. This would be interesting as a
demonstration of what a good matrix language could look like, but don't
expect to be able to sell it to the world, since the competitors are far
ahead in all other aspects.

> You'd need to find a "real" application though.

There are plenty. Signal processing, automatic control, PDE solvers,...

> The classical "kernels"
> (matrix multiply, inversion etc) are precisely the things you may not
> want to do in Haskell.

With the current compiler technology for Haskell, one would add. I don't
think it would be impossible to compile such Haskell programs into
efficient code. Functional languages for matrix/array computing was a
quite active research area 10-15 years ago, with efforts like Sisal and
Id. These languages were strict and first order, but you can write such
programs in Haskell. I think it would be possible to have a Haskell
compiler that could manage a subset of Haskell matrix programs quite
well. (Christoph Herrmann did some interesting work in this direction:
Christoph, you're reading this list, aren't you?) Whether it would be
worth the (big) effort is another matter.

Björn Lisper
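As a hint of what the "purified" idiom could look like: instead of
MATLAB's Y = inv(A)*B, a side effect-free solve. The following is a
minimal sketch using Gaussian elimination with partial pivoting over
lists of lists (no singular-matrix check, and nothing like the
performance a real library would need); the operator name <\> is made
up, echoing MATLAB's backslash:

  import Data.List (maximumBy)
  import Data.Ord (comparing)

  type Vec = [Double]
  type Mat = [Vec]

  -- y = a <\> b plays the role of Y = inv(A)*B (or MATLAB's A\B)
  infixl 7 <\>
  (<\>) :: Mat -> Vec -> Vec
  a <\> b = backSub (triangulate (zipWith (\row x -> row ++ [x]) a b))

  -- forward elimination on the augmented matrix
  triangulate :: Mat -> Mat
  triangulate []   = []
  triangulate rows = piv : triangulate (map reduce rest)
    where
      piv : rest = pivotFirst rows
      reduce row = zipWith (\p r -> r - (head row / head piv) * p)
                           (tail piv) (tail row)

  -- bring the row with the largest leading coefficient to the front
  pivotFirst :: Mat -> Mat
  pivotFirst rows = best : [r | (i, r) <- irows, i /= bi]
    where
      irows      = zip [0 :: Int ..] rows
      (bi, best) = maximumBy (comparing (abs . head . snd)) irows

  -- back-substitution over the triangulated, still-augmented rows
  backSub :: Mat -> Vec
  backSub = foldr step []
    where
      step (p : rest) xs =
        ((last rest - sum (zipWith (*) (init rest) xs)) / p) : xs
      step [] xs = xs

Here [[2,1],[1,3]] <\> [5,10] evaluates to [1.0,3.0]. The pure version
of "updating" is then just rebinding: each assignment builds a new
matrix, and it becomes the compiler's job to reuse the storage.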
RE: ideas for compiler project
Lots of people have observed that Haskell might be a good "scripting
language" for numerical computation. In complicated numerical
applications, the program may spend most of its time in (say) matrix
multiply, which constitutes a tiny fraction of the code for the
application. So write the bulk of the application in Haskell (where the
logic is complex but the performance doesn't matter) and then link to a
C or Fortran library to do the small part that is really compute
intensive.

More concretely, suppose you built an FFI interface to BLAS or some
readily available numerical library, and then demonstrated the utility
of the resulting beast by writing some non-trivial numerical application
in it.

You'd need to find a "real" application though. The classical "kernels"
(matrix multiply, inversion etc.) are precisely the things you may not
want to do in Haskell.

Simon

| -----Original Message-----
| From: Hal Daume III [mailto:[EMAIL PROTECTED]]
| Sent: 22 January 2002 22:00
| To: Haskell Mailing List
| Subject: ideas for compiler project
|
| Hi All,
|
| I'm currently taking a class in compiler optimization for
| high performance computing (i.e., parallel architectures,
| including VLIW, FPGA, multimedia extension architectures,
| systems-on-a-chip, etc.). It's the second of two graduate
| level compiler courses, and it is purely project based. And
| the project is of our choosing.
|
| The professor has made some suggested projects, which I could
| do, but none of them are really FP specific. I'm curious if
| anyone has any ideas of a project I could do. I'm looking for
| something that's open, yet constrained, etc. (standard
| semester-long-project-course stuff), hopefully having to do
| with optimizations specifically for FPLs, or at least ones
| that would benefit them.
|
| I'm open to all ideas...
|
|  - Hal
|
| --
| Hal Daume III                                    |
| "Computer science is no more about computers     | [EMAIL PROTECTED]
|  than astronomy is about telescopes." -Dijkstra  | www.isi.edu/~hdaume
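A sketch of what the BLAS binding could look like, using the FFI as it
stands in current GHC and the CBLAS calling convention. The enum values
(101 for CblasRowMajor, 111 for CblasNoTrans) and the cblas_dgemm
signature come from the CBLAS interface; linking needs an actual BLAS
implementation (e.g. -lcblas or -lopenblas), and matMul itself is an
invented convenience wrapper:

  {-# LANGUAGE ForeignFunctionInterface #-}
  import Foreign.Ptr (Ptr)
  import Foreign.C.Types (CInt, CDouble)
  import Foreign.Marshal.Array (withArray, allocaArray, peekArray)

  foreign import ccall unsafe "cblas_dgemm"
    c_dgemm :: CInt -> CInt -> CInt           -- order, transA, transB
            -> CInt -> CInt -> CInt           -- m, n, k
            -> CDouble -> Ptr CDouble -> CInt -- alpha, A, lda
            -> Ptr CDouble -> CInt            -- B, ldb
            -> CDouble -> Ptr CDouble -> CInt -- beta, C, ldc
            -> IO ()

  rowMajor, noTrans :: CInt
  rowMajor = 101   -- CblasRowMajor
  noTrans  = 111   -- CblasNoTrans

  -- C = A*B for a row-major m-by-k A and k-by-n B, given as flat lists
  matMul :: Int -> Int -> Int -> [Double] -> [Double] -> IO [Double]
  matMul m k n as bs =
    withArray (map realToFrac as) $ \pa ->
    withArray (map realToFrac bs) $ \pb ->
    allocaArray (m * n) $ \pc -> do
      c_dgemm rowMajor noTrans noTrans
              (fromIntegral m) (fromIntegral n) (fromIntegral k)
              1 pa (fromIntegral k)      -- alpha = 1, lda = k
              pb (fromIntegral n)        -- ldb = n
              0 pc (fromIntegral n)      -- beta = 0, ldc = n
      map realToFrac <$> peekArray (m * n) pc

The Haskell side stays pure and high level; only this thin wrapper lives
in IO, and since dgemm with beta = 0 only writes its result array, one
could go further and expose a pure multiplication on top of it.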