[Haskell-cafe] TFP 2007: Last Call for Participation

2007-02-28 Thread TFP 2007
Dear Colleagues,

The deadline to register for TFP 2007 is quickly approaching. You may
register at: http://cs.shu.edu/tfp2007/registration.html . Please also be
advised that the deadline to make hotel reservations at the guaranteed
rate for TFP 2007 participants is quickly approaching. Hotel information
can be found at: http://cs.shu.edu/tfp2007/accomodations.html .

You may browse the program and schedule for TFP 2007 at: 
http://cs.shu.edu/tfp2007/schedule.html .

We look forward to seeing you in NYC!!!

Cheers,

Marco


Dr. Marco T. Morazan
TFP 2007
Program Committee Chair
http://cs.shu.edu/tfp2007/


Re: [Haskell-cafe] Logic programming using lazy evaluation

2007-02-28 Thread Henning Thielemann

On Tue, 27 Feb 2007, Chris Kuklewicz wrote:

  Accessing variable values by integer identifiers means that the garbage
  collector cannot free values that are no longer needed.

 That will always be true for potentially non-finite lists of equations.

Here is an implementation that creates and solves infinitely many
equations - of course, it cannot check whether the variable substitutions
it finds satisfy all equations. However, this implementation still blows
up the heap, because every variable needs one more iteration of the
solution algorithm.


{- |
We want to solve a system of simple equations,
where it is only necessary to derive
a value from an equation in which all but one variable are determined. Say

@
1+2=x  -- add 1 2 x
y*z=20 -- times y z 20
x+y=5  -- add x y 5
@

should be solved as

@
x=3
y=2
z=10
@

However, unlike usual logic programming,
I'm not interested in multiple solutions; I expect only one.
If a variable is underdetermined,
e.g. if the only rule containing @x@ is @add x x 3@,
it shall evaluate to undefined.
If a variable is overdetermined,
that is, if different rules lead to different values for that variable,
one of these values shall be returned.
Checking for consistency per variable would require processing the whole
(possibly infinite) list of rules.
Instead one could check consistency per rule.
That is, the solver should return a list of Bools
which indicates which rules could be satisfied.
In this setting it is possible to solve the equations lazily.

We use a naive algorithm, but the crux is that we implement it lazily.

In each step we scan all equations and determine the values for variables
which can be determined immediately.
We repeat this infinitely often.

Consider the system
@
x+y=3
y*z=6
z=3
@
then we get the following iteration results for @(x,y,z)@:
@
[(Nothing, Nothing, Nothing),
 (Nothing, Nothing, Just 3),
 (Nothing, Just 2,  Just 3),
 (Just 1,  Just 2,  Just 3),
 (Just 1,  Just 2,  Just 3),
 ...
@
However, since we organize the iteration per variable,
we obtain
@
x = [Nothing, Nothing, Nothing, Just 1, ...
y = [Nothing, Nothing, Just 2, ...
z = [Nothing, Just 3, ...
@



Features:
 * free choice of types of values and static type checking
 * free choice of rules
 * lazy evaluation of solutions,
   thus infinitely many variables and rules are possible
   (although slow)
-}

module UniqueLogic where

import Control.Monad (liftM, liftM2, msum, mplus)
import Data.Maybe (catMaybes)
import Data.List (transpose)

{- * Basic routines -}

newtype Variable a = Variable {listFromVariable :: [Maybe a]}
   deriving Show

constant :: a -> Variable a
constant = Variable . repeat . Just

rule2 :: (b -> a) -> (a -> b) ->
   Variable a -> Variable b ->
   (Variable a, Variable b)
rule2 getA getB (Variable as) (Variable bs) =
   (Variable $ map (liftM getA) bs,
    Variable $ map (liftM getB) as)

-- rule3 :: ((a,b,c) -> (a,b,c)) ->
rule3 :: (b -> c -> a) -> (c -> a -> b) -> (a -> b -> c) ->
   Variable a -> Variable b -> Variable c ->
   (Variable a, Variable b, Variable c)
rule3 getA getB getC (Variable as) (Variable bs) (Variable cs) =
   (Variable $ zipWith (liftM2 getA) bs cs,
    Variable $ zipWith (liftM2 getB) cs as,
    Variable $ zipWith (liftM2 getC) as bs)


{- |
The following routine works only for finite lists,
that is, for a finite number of variable usages
(e.g. a finite number of equations).
-}
alternatives :: [Variable a] -> Variable a
alternatives =
   Variable . scanl mplus Nothing .
   map msum . transpose . map listFromVariable
--   Variable . (Nothing:) . map msum . transpose . map listFromVariable


{- |
Computing the value of a variable
means finding the first time where the variable could be determined.
-}
solve :: Variable a -> a
solve (Variable a) = head $ catMaybes a



{- * Example rule generators -}

equal :: (Num a) =>
   Variable a -> Variable a ->
   (Variable a, Variable a)
equal = rule2 id id

-- | @a+b=c@
add :: (Num a) =>
   Variable a -> Variable a -> Variable a ->
   (Variable a, Variable a, Variable a)
add = rule3 subtract (-) (+)

-- | @a*b=c@
times :: (Fractional a) =>
   Variable a -> Variable a -> Variable a ->
   (Variable a, Variable a, Variable a)
times = rule3 (flip (/)) (/) (*)



{- * Example equation system -}

{- |
@
x=1
y=2
z=3

x+y=3
y*z=6
z=3
@
-}
example :: (Double,Double,Double)
example =
   let (x0,y0,_) = add   x y (constant 3)
       (y1,z0,_) = times y z (constant 6)
       (z1,_)    = equal z   (constant 3)
       (x1,y2,_) = add   x y (constant 3)  -- duplicate rule
       x = alternatives [x0,x1]
       y = alternatives [y0,y1,y2]
       z = alternatives [z0,z1]
   in  (solve x, solve y, solve z)



{- |
A variant of 'zipWith' which does not check for the end of lists
and thus can generate infinite lists in a tied knot.
-}
zipWithInf :: (a -> b -> c) -> [a] -> [b] -> [c]
zipWithInf f =
   let recurse ~(x:xs) ~(y:ys) = f x y : recurse xs ys
   in  recurse

{- |
An infinite equation system.
@
x0 = 0
x0+1 = x1
x1+1 = x2
x2+1 = x3
x3+1 = x4
...
@

In principle, 
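
A sketch of one way to finish this construction with the combinators above
(a hypothetical reconstruction - the original code is cut off in this
archive - tying the knot between the variable list and the rule list):

infiniteSystem :: [Variable Double]
infiniteSystem = vars
   where
      -- rule i: x_i + 1 = x_{i+1}; it yields an estimate for x_i
      -- (via subtraction) and one for x_{i+1} (via addition)
      rules = zipWithInf
                 (\x y -> let (x', _, y') = add x (constant 1) y
                          in  (x', y'))
                 vars (tail vars)
      -- the single ground fact: x0 = 0
      (x0', _)  = equal (head vars) (constant 0)
      fromLeft  = x0' : map snd rules  -- estimates from the rule to the left
      fromRight = map fst rules        -- estimates from the rule to the right
      vars      = zipWithInf (\l r -> alternatives [l, r]) fromLeft fromRight

-- each solution needs one more iteration than the one before it:
-- map solve (take 5 infiniteSystem)  ==  [0.0,1.0,2.0,3.0,4.0]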

Re: [Haskell-cafe] Logic programming using lazy evaluation

2007-02-28 Thread Henning Thielemann

On Tue, 27 Feb 2007, Chris Kuklewicz wrote:

 For an infinite number of equations you have to generate them as data at run
 time.  Your notation above only works for a finite set of equations known at
 compile time.

 So you have a stream of equations, and each equation depends on some subset of
 the variables seen so far and may also introduce new variables for the first 
 time.

 As equations are read it will become possible to solve for variables, so there
 is an evolving environment of solved variables as you read the equation 
 stream.

 You can never free up the old variables since you cannot prove that future
 equations will not use them.

Since the program which generates the equations is finite, it may well be
possible to see - even for the garbage collector - that some variables are
no longer used.

 After each step the environment of variables and equations will be updated, 
 and
 if solving a set of equations found no solution then the whole set is
 inconsistent and you can abort.  If solving a set of equations gives multiple
 answers (x*x==4) then you could have parallel sets of variable assignments, 
 or a
 heuristic to pick only one member of that set of sets.

For me it is ok to consider x*x==4 as unsolvable if there is no other
equation from which x can be derived.


Re: [Haskell-cafe] Using loop fusion for writing efficent high-level code Re[2]: [Haskell] [Fwd: Re: Computer Language Shootout]

2007-02-28 Thread Bryan O'Sullivan

Bulat Ziganshin wrote:


can you please provide examples of such code?


I'd recommend taking a look at the new binary package.  It's very 
cleanly written, and mostly easy to understand.  It's also easy to see 
where the optimisations have been made.  The only part that might induce 
sudden cranial expansion is the Builder monoid, which constructs a big 
delayed computation to efficiently fill out a lazy ByteString when the 
Put monad is run.
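
For flavour, here is a toy sketch of the Builder idea - a monoid of
delayed output instructions whose mappend is function composition - under
the simplifying assumption that we merely cons chunks (the real Builder
in the binary package fills actual buffers; none of these names are its
API):

import qualified Data.ByteString as S
import qualified Data.ByteString.Lazy as L
import Data.Word (Word8)

-- a Builder prepends its output chunks onto whatever comes after it
newtype Builder = Builder ([S.ByteString] -> [S.ByteString])

instance Semigroup Builder where
  Builder f <> Builder g = Builder (f . g)  -- f's output, then g's

instance Monoid Builder where
  mempty = Builder id                       -- produce nothing

word8 :: Word8 -> Builder
word8 w = Builder (S.singleton w :)

-- running the delayed computation yields a lazy ByteString
toLazyByteString :: Builder -> L.ByteString
toLazyByteString (Builder f) = L.fromChunks (f [])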


The optimisations haven't perverted the readability of the code much, 
but it's still quite fast.  I've clocked end-to-end serialisation and 
deserialisation over an Infiniband network at 234 MB/sec (~25% of line 
rate).  This consumed about 90% of one CPU on the sending side, while 
the receiving side was 100% pegged.


b


[Haskell-cafe] numeric minimization in Haskell

2007-02-28 Thread Chad Scherrer

Does anyone know of any Haskell code for numeric minimization? I was
thinking conjugate gradient would be good, but at this point I'd be
happy with anything.

I've found some code written by Tomasz Cholewo at
http://ci.uofl.edu/tom/software/Haskell/
but it requires importing his Arr.lhs library, which is not publicly
available.

The only other thing I've been able to dig up is this
www.st.cs.ru.nl/papers/1997/serp97-cgfunctional.ps.gz
which suggests Haskell is slow for such problems. I suspect this was
an implementation issue, so I don't think their code would be very
helpful (though it would be nice to tidy it up and demonstrate the
improvement - could it beat the Clean implementation they give?)

The other possibility I was considering was using Alberto Ruiz's
wrapper for the GSL library
http://dis.um.es/~alberto/GSLHaskell/
The only problems with this are (1) requires having GSL available, so
it's not as portable, and (2) does everything in terms of lists, which
requires a lot of translations to and from lists (I'm using mutable
arrays).

If there's nothing already written that works together, one of these
should give me a start, but I'd like to avoid reinventing the wheel if
possible.

Thanks!

--

Chad Scherrer

Time flies like an arrow; fruit flies like a banana -- Groucho Marx


Re: [Haskell-cafe] numeric minimization in Haskell

2007-02-28 Thread David Roundy
On Wed, Feb 28, 2007 at 10:58:30AM -0800, Chad Scherrer wrote:
 Does anyone know of any Haskell code for numeric minimization? I was
 thinking conjugate gradient would be good, but at this point I'd be
 happy with anything.

Conjugate gradients shouldn't require more than about five lines of code
(which I don't have time to write at the moment), apart from array code for
the vectors you wish to minimize.
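
For concreteness, here is roughly what such a minimal version might look
like - a sketch only, with plain lists standing in for arrays, a crude
backtracking line search, and the Polak-Ribiere update (all names here
are made up for illustration):

-- vector helpers on plain lists
addV, subV :: [Double] -> [Double] -> [Double]
addV = zipWith (+)
subV = zipWith (-)

scaleV :: Double -> [Double] -> [Double]
scaleV a = map (a *)

dotV :: [Double] -> [Double] -> Double
dotV x y = sum (zipWith (*) x y)

-- halve the step until the function value decreases
lineSearch :: ([Double] -> Double) -> [Double] -> [Double] -> Double
lineSearch f x d = go 1.0
  where go a | a < 1e-12                     = 0
             | f (x `addV` scaleV a d) < f x = a
             | otherwise                     = go (a / 2)

-- n iterations of Polak-Ribiere nonlinear conjugate gradients
cgMinimize :: Int -> ([Double] -> Double) -> ([Double] -> [Double])
           -> [Double] -> [Double]
cgMinimize n f grad x0 = go n x0 g0 g0
  where
    g0 = map negate (grad x0)  -- start along steepest descent
    go 0 x _ _ = x
    go k x d g =
      let a    = lineSearch f x d
          x'   = x `addV` scaleV a d
          g'   = map negate (grad x')
          beta = max 0 (dotV g' (g' `subV` g) / dotV g g)
          d'   = g' `addV` scaleV beta d
      in  go (k - 1) x' d' g'

-- e.g. cgMinimize 30 (\[x,y] -> (x-1)^2 + 10*(y-2)^2)
--                    (\[x,y] -> [2*(x-1), 20*(y-2)]) [0,0]
-- should approach [1.0, 2.0]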

I'd recommend considering GHC.PArr, if you don't need more than just
minimization.  If you include {-# OPTIONS -fparr -fglasgow-exts #-}
at the top of your file, you'll also get a fancy array syntax.  Manuel is
currently working on parallelization of arrays using this API, so this
would be a good API to use, moving forward.
-- 
David Roundy
Department of Physics
Oregon State University


[Haskell-cafe] Re: Hi can u explain me how drop works in Haskell

2007-02-28 Thread Fernando Gonzalez
iliali16 <iliali16 at gmail.com> writes:

 
 
 Hi I am trying to implement the function drop in haskell the thing is that I
 I have been trying for some time and I came up with this code where I am
 trying to do recursion:
 
 drop :: Integer -> [Integer] -> [Integer]
 drop 0 (x:xs) = (x:xs)
 drop n (x:xs)
   |n < lList (x:xs) = dropN (n-1) xs :
   |otherwise = []
 
 So I want to understand how this would work and what exactly should I put
 as an answer on line 4, because that is where I am lost. I know I might have
 got the base case wrong as well, but I don't know what to think for it. I have
 done the lList as a function before writing this one. Thanks to those who can
 help me understand this. Thanks a lot in advance! Have a nice day!


I'm curious as to the call to dropN. If you're trying to do recursion then you
should probably call your original drop function (maybe your drop function was
supposed to be called dropN?).

Anyway, I think if you're only testing a single function and you're trying to
come to grips with recursion and other fundamentals of functional programming,
then sidetracking to become familiar with a testing suite such as Quickcheck or
HUnit might be overkill. I prefer the method used by Thomas Hartman in his 
reply.

However, it should be tweaked a little to really solidify the definition and
verification of your drop function:

drop :: Integral t => t -> [a] -> [a]
drop _ [] = []  -- extra
drop 0 (x:xs) = x:xs
drop n (x:xs) = drop (n-1) xs

main = test
test = do print test1
          print test2
          print test3
          print test4  -- extra

test1 = drop 3 [1,2,3] == []
test2 = drop 2 [1,2,3] == [3]
test3 = drop 0 [1,2,3] == [1,2,3]
test4 = drop 4 [1,2,3] == []  -- extra

Making your function polymorphic over the Integral class for its first
parameter keeps it working properly whether it receives an Int or an
Integer - a useful property when your function is called by other
functions, as it's one less thing to keep track of.

The extra case in the definition keeps the function from failing if n is greater
than 'length (x:xs)'. The extra test (test4) verifies this property.

Fernando Gonzalez



Re: [Haskell-cafe] numeric minimization in Haskell

2007-02-28 Thread Chad Scherrer

On 2/28/07, Dan Weston [EMAIL PROTECTED] wrote:

GSL is written in C, and I don't know any language more portable than
that! gsl_vector and gsl_matrix use a contiguous block of doubles, so
you can use the FFI to marshall this however you want for efficiency.

I'd stick with GSLHaskell until you're ready to optimize the data
marshalling though.

I like spending my time on interesting things, not reinventing
pre-debugged and efficient libraries. I use GSLHaskell in my work and
have never had a problem.

Dan


That's my preference, too. Have you ever tried GSLHaskell on MS
Windows? I do most of my work on Linux, but a guy I'm working with
uses MS, and I've heard cygwin can be a huge pain.

I have a big space leak right now that I thought might be caused by list
laziness in the interface, but that should be squashable with a little
work, and it is not as big a deal as having lots of dependencies when
passing code around. I only really need one function from GSL, and the
odds of someone having written a work-alike in Haskell seemed pretty
good.

Of course, in cases where GSL is already installed, or where more of
its functionality is needed, GSLHaskell is the obvious choice.

Chad


Re: [Haskell-cafe] Using loop fusion for writing efficent high-level code Re[2]: [Haskell] [Fwd: Re: Computer Language Shootout]

2007-02-28 Thread Duncan Coutts
On Wed, 2007-02-28 at 09:51 -0800, Bryan O'Sullivan wrote:
 Bulat Ziganshin wrote:
 
  can you please provide examples of such code?
 
 I'd recommend taking a look at the new binary package.  It's very 
 cleanly written, and mostly easy to understand.  It's also easy to see 
 where the optimisations have been made.  The only part that might induce 
 sudden cranial expansion is the Builder monoid, which constructs a big 
 delayed computation to efficiently fill out a lazy ByteString when the 
 Put monad is run.
 
 The optimisations haven't perverted the readability of the code much, 
 but it's still quite fast.  I've clocked end-to-end serialisation and 
 deserialisation over an Infiniband network at 234 MB/sec (~25% of line 
 rate).  This consumed about 90% of one CPU on the sending side, while 
 the receiving side was 100% pegged.

Yes, we've optimised the writing side much more than the reading side at
the moment. I'm sure it's possible to get the reading up to at least the
same speed. I think we ought to be able to push both further after that
because we're still not doing the bounds check commoning up
transformation that we'd planned on (more GHC rules). So given enough
hacking time there's more performance available.

In our benchmarks reading is currently about 40% of writing speed and
we're topping out at about 30-40% of maximum memory bandwidth for
writes. Apparently that's only 200x faster than the faster of two common
python serialisation libs, so we've got some way to go yet.
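
(For readers unfamiliar with GHC rewrite rules: they are source-level
equations that the compiler applies during optimisation. A classic
stand-alone example - not one of binary's actual rules - is:

{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}

which collapses two list traversals into one wherever the pattern
appears after inlining.)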

Duncan



Re: [Haskell-cafe] Re: Code and Perf. Data for Prime Finders (was: Genuine Eratosthenes sieve)

2007-02-28 Thread Claus Reinke

Melissa, thank you for taking my comments as constructively as they were meant.

and not to worry, i'm sure i'm not the only one who enjoyed this little
exercise and learned a lot from it!-) and even if we've found more than
you expected, you're quite right, one doesn't get anywhere unless someone
starts, noticing that something interesting is going on, raising
questions about it, supplying the first answers, and documenting the
results when the discussion quiets down.


I think modulus is a bit of a red herring -- it's a symptom, not the disease.


you mean i'm distracting the primates with smoked fish?-) perhaps you're right,
but let me illustrate what i meant anyway. assuming that mod is not a primitive,
but has to be computed some way, perhaps by a fragment of gcd, here's a look
at the recursion for computing mod x y:

   mods a b | b>a       = [(a,b)]
            | otherwise = (a,b):mods (a-b) b

now, if we take that folklore code again, and have a look at the sieves
for 5 and 7, after 70 has already been filtered out by the sieve for 2
(first column has the input list, followed by the unfolded mod-recursion
for each element):

   Main> putStrLn $ unlines [show $ mods x 5 | x<-[3,5..90], (x`mod`3)>0]
   [(5,5),(0,5)]
   [(7,5),(2,5)]
   ..
   [(67,5),(62,5),(57,5),(52,5),(47,5),(42,5),(37,5),(32,5),(27,5),(22,5),(17,5),(12,5),(7,5),(2,5)]
   [(71,5),(66,5),(61,5),(56,5),(51,5),(46,5),(41,5),(36,5),(31,5),(26,5),(21,5),(16,5),(11,5),(6,5),(1,5)]
   ..
   [(85,5),(80,5),(75,5),(70,5),(65,5),(60,5),..,(20,5),(15,5),(10,5),(5,5),(0,5)]
   [(89,5),(84,5),(79,5),(74,5),(69,5),(64,5),(59,5),..,(19,5),(14,5),(9,5),(4,5)]

   Main> putStrLn $ unlines [show $ mods x 7 | x<-[3,5..90], (x`mod`3)>0, x`mod`5>0]
   [(7,7),(0,7)]
   [(11,7),(4,7)]
   ..
   [(67,7),(60,7),(53,7),(46,7),(39,7),(32,7),(25,7),(18,7),(11,7),(4,7)]
   [(71,7),(64,7),(57,7),(50,7),(43,7),(36,7),(29,7),(22,7),(15,7),(8,7),(1,7)]
   [(73,7),(66,7),(59,7),(52,7),(45,7),(38,7),(31,7),(24,7),(17,7),(10,7),(3,7)]
   [(77,7),(70,7),(63,7),(56,7),(49,7),(42,7),(35,7),(28,7),(21,7),(14,7),(7,7),(0,7)]
   [(79,7),(72,7),(65,7),(58,7),(51,7),(44,7),(37,7),(30,7),(23,7),(16,7),(9,7),(2,7)]
   [(83,7),(76,7),(69,7),(62,7),(55,7),(48,7),(41,7),(34,7),(27,7),(20,7),(13,7),(6,7)]
   [(89,7),(82,7),(75,7),(68,7),(61,7),(54,7),(47,7),(40,7),(33,7),(26,7),(19,7),(12,7),(5,7)]

we find that 70 is indeed revisited by both (e.g. via 85 and 77), even
though it is no longer in the input list (first columns).

by aligning the mod-recursion with going through the list of prime
candidates, we save ourselves a lot of recomputation by sharing, e.g. the
recursion for mod 91 7 overlaps with, and can share, that of mod 77 7,
and so on

   Main> mods 91 7
   [(91,7),(84,7),(77,7),(70,7),(63,7),(56,7),(49,7),(42,7),(35,7),(28,7),(21,7),(14,7),(7,7),(0,7)]

which is what i meant with incrementally computing the modulus (if i know
the result of mod 77 7, i can use that to compute mod 91 7), instead of
doing it from scratch for each candidate. both versions have to pass by
70 at least three times, even though the first filter is sufficient to
remove that number as a candidate, again in both versions.
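
to make that concrete, here is a tiny sketch (mine, not from the folklore
code): successive odd candidates differ by 2, so the residue mod p can be
carried along incrementally - add 2, subtract p on overflow - with no
division at all:

   -- residues of 3,5,7,9,... modulo an odd prime p
   residues :: Int -> [Int]
   residues p = iterate step (3 `mod` p)
      where step r = let r' = r + 2
                     in  if r' >= p then r' - p else r'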

different people think differently, and i found the idea of incremental
vs from-scratch modulus helpful in explaining the performance
differences, and the idea of explicit vs implicit sieves helpful in
explaining the resource usage difference between the folklore and the
so-called naive version. but others might find the cross-out footprint
more helpful.

going beyond simple sieves, wheel sieves are all about using the
advantages of incremental computations, but they still make their wheels
explicit, and suffer resource usage problems, which indirectly harm their
performance. i still suspect that there's an incremental and implicit
variant of the incremental, but explicit wheel sieve.


[transforming sieve variants into each other]
At some point in the middle, you'll have something that can be viewed
from both perspectives. But the existence of such a progression of
algorithms between algorithm A and algorithm B doesn't mean that A and B
are fundamentally the same.


i would hope that one of two things would happen: either one could get
from A to B via equivalence-preserving transformations (for a suitably
chosen form of equivalence), in which case A and B would be equivalent,
or one would need to do non-equivalence-preserving transformations at
some point in the middle, clearly exposing the implementation decisions
that distinguish B from A (if B is simply more concrete than A,
prescribing efficient behaviour where A just allows it, then we are into
the subject of refinement).


Perhaps we're saying the same thing and misunderstanding each other,


quite possibly, so i'll stop here, with a closing remark

From an operational behavior standpoint, the ``folklore sieve'' just
doesn't do the same thing. Every expectation people have about the
behavior of the Sieve of

[Haskell-cafe] Re: Using loop fusion for writing efficent high-level code Re[2]: [Haskell] [Fwd: Re: Computer Language Shootout]

2007-02-28 Thread Duncan Coutts
On Wed, 2007-02-28 at 16:38 +0300, Bulat Ziganshin wrote:
 Hello Duncan,
 
 Tuesday, February 27, 2007, 1:09:53 PM, you wrote:
 
 can you please provide examples of such code? the ultimate goal is to
 have tutorial on writing efficient code in high-level manner, but that
 is too ambitious. just some examples with a little explanation of how
 this is compiled?

The main example of course is ByteString fusion as presented in our
recent paper:

http://www.cse.unsw.edu.au/~dons/papers/CSL06.html

The first example in the paper is the simple pipeline:

return · foldl hash 5381 · map toLower · filter isAlpha =<< readFile f
   where hash h c = h ∗ 33 + ord c


This is fairly fast when just using the ordinary ByteString
implementations of foldl, map, filter etc., but the naive C version is
still faster. That is, just composing highly tuned low level
implementations of list-like combinators is not enough.

We do not want to take the Python approach of stitching together fast
C/C++ code with slow glue code. Instead we use the above more like a spec
and try to generate high performance low level code.

We transform each of the functions foldl, map and filter into their
stream style counterparts, fuse them together and inline everything (the
compiler also does some other critical optimisations like case-of-case)
to get a single imperative loop. The result is competitive with C;
indeed, we have to rewrite the C code into an even uglier form to be able
to beat the Haskell version.
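
To give a feel for what the stream style counterparts look like, here is
a toy version of the idea (heavily simplified from the paper - the real
library works over ByteStrings and carries size hints):

{-# LANGUAGE ExistentialQuantification #-}

data Step s a = Done | Skip s | Yield a s
data Stream a = forall s. Stream (s -> Step s a) s

streamList :: [a] -> Stream a   -- the state is the rest of the list
streamList xs = Stream next xs
  where next []     = Done
        next (y:ys) = Yield y ys

mapS :: (a -> b) -> Stream a -> Stream b
mapS f (Stream next s0) = Stream next' s0
  where next' s = case next s of
          Done       -> Done
          Skip s'    -> Skip s'
          Yield a s' -> Yield (f a) s'

filterS :: (a -> Bool) -> Stream a -> Stream a
filterS p (Stream next s0) = Stream next' s0
  where next' s = case next s of
          Done                   -> Done
          Skip s'                -> Skip s'
          Yield a s' | p a       -> Yield a s'
                     | otherwise -> Skip s'  -- Skip keeps next' non-recursive

foldlS :: (b -> a -> b) -> b -> Stream a -> b
foldlS f z (Stream next s0) = go z s0        -- the single consumer loop
  where go acc s = case next s of
          Done       -> acc
          Skip s'    -> go acc s'
          Yield a s' -> go (f acc a) s'

Because mapS and filterS are non-recursive, inlining them into foldlS
leaves one tight recursive loop, which is the whole point.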

Duncan



bytestring vs polymorphic contiguous lists Re: [Haskell-cafe] Re: Using loop fusion for writing efficent high-level code Re[2]: [Haskell] [Fwd: Re: Computer Language Shootout]

2007-02-28 Thread Claus Reinke

The main example of course is ByteString fusion as presented in our recent 
paper:
http://www.cse.unsw.edu.au/~dons/papers/CSL06.html


btw, why did you restrict yourself to improving [Char], rather than [a]?

naively, it would seem to me that most of the framework should work
just as well for the general case, with some additional improvements
through specialising a to Char. and if that is the case, it hurts to think
that there is a nice framework out there that i can't use unless my a is Char.

claus



Re: bytestring vs polymorphic contiguous lists Re: [Haskell-cafe] Re: Using loop fusion for writing efficent high-level code Re[2]: [Haskell] [Fwd: Re: Computer Language Shootout]

2007-02-28 Thread Donald Bruce Stewart
claus.reinke:
 The main example of course is ByteString fusion as presented in our recent 
 paper:
 http://www.cse.unsw.edu.au/~dons/papers/CSL06.html
 
 btw, why did you restrict yourself to improving [Char], rather than [a]?
 
 naively, it would seem to me that most of the framework should work
 just as well for the general case, with some additional improvements
 through specialising a to Char. and if that is the case, it hurts to think
 that there is a nice framework out there that i can't use unless my a is 
 Char.
 
 claus

Fear not!

Arbitrary arrays of 'a' are in the works, thanks to Roman's data
parallel Haskell work.

There's also already Storable a => Vector a, written for last year's
SoC. Just needs some polishing. We should get around to that soon, I
think.

-- Don


Re: [Haskell-cafe] numeric minimization in Haskell

2007-02-28 Thread ajb
G'day all.

Quoting David Roundy [EMAIL PROTECTED]:

 Conjugate gradients shouldn't require more than about five lines of code
 (which I don't have time to write at the moment), apart from array code for
 the vectors you wish to minimize.

I apologise for the quality of the code, but here's an implementation
of the Marquardt-Levenberg algorithm (which is actually a smooth
interpolation between the conjugate gradient and Newton-Raphson methods),
which I had to do a year or so ago:

http://andrew.bromage.org/darcs/misc/Marquardt.hs

It even includes code to compute standard errors on the parameters.
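
For reference, the damped least-squares step at the heart of
Marquardt-Levenberg, in its standard textbook form (stated from memory,
not taken from the linked file):

\[
  \bigl(J^\top J + \lambda\,\operatorname{diag}(J^\top J)\bigr)\,\delta
    = J^\top r
\]

where J is the Jacobian of the residuals r at the current parameters;
large \lambda gives (scaled) gradient-descent-like steps, while small
\lambda approaches Gauss-Newton.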

Cheers,
Andrew Bromage


Re: [Haskell-cafe] numeric minimization in Haskell

2007-02-28 Thread David Roundy
On Wed, Feb 28, 2007 at 02:54:06PM -0800, Dan Weston wrote:
 GSL is written in C, and I don't know any language more portable than 
 that! gsl_vector and gsl_matrix use a contiguous block of doubles, so 
 you can use the FFI to marshall this however you want for efficiency.
 
 I'd stick with GSLHaskell until you're ready to optimize the data 
 marshalling though.
 
 I like spending my time on interesting things, not reinventing 
 pre-debugged and efficient libraries. I use GSLHaskell in my work and 
 have never had a problem.

It seems like GSLHaskell is an awfully big stick, when all you need are
scalar multiply and vector addition.  Of course, we don't know what
functions he wants to minimize, but in the absence of any need for GSL
functions, I don't see a good reason for it.

I see that GSLHaskell has a binding to a conjugate gradients minimizer, but
it's not useful for any hard problems (it stores the trajectory, which
defeats the purpose of using conjugate gradients), and can only be very,
very slow.

From the API alone, one can tell it cannot be efficient.  Using code
written by people who obviously either don't know or don't care about
efficiency is just not, in general, a good idea.  I don't know what you
use GSLHaskell for in your
work, but I hope you don't use it for conjugate gradients, or only use it
on easy problems.
-- 
David Roundy
Department of Physics
Oregon State University


[Haskell-cafe] State monad in the wikibood article

2007-02-28 Thread TJ

In the wikibook article here:
http://en.wikibooks.org/wiki/Haskell/Understanding_monads, which
really does an excellent job explaining things (nuclear waste
woohoo!), I am stuck at the following code snippet:

container >>= fn =
    \st -> let (a, st2)   = container st
               container2 = fn a
           in  container2 st2

What stumps me is that (>>=) is supposed to return a container, but if
we return (container2 st2) as in the code, then what we're really
returning is the contents of the container! So what would happen if we
do this:

nuclearWasteInContainer >>= processTheWaste >>= thoroughProcessTheWaste

It seems to me that the second (>>=) in the above expression would
have the arguments (nuclearWaste) and (nuclearWasteProcessor), when
what it really expects are (Container nuclearWaste) and
(nuclearWasteProcessor). So isn't something wrong with the definition
of (>>=) above? Or am I missing something?

(I know the article says that the type for their supposed State monad
at that point is not actually correct, and will be clarified further
on, but that seems to be irrelevant to my question.)


TJ the forever noobie.


Re: [Haskell-cafe] State monad in the wikibood article

2007-02-28 Thread Matthew Brecknell
TJ [EMAIL PROTECTED] said:
 In the wikibook article here:
 http://en.wikibooks.org/wiki/Haskell/Understanding_monads, which
 really does an excellent job explaining things (nuclear waste
 woohoo!), I am stuck at the following code snippet:
 
 container >>= fn =
  \st -> let (a, st2)   = container st
             container2 = fn a
         in  container2 st2
 
 What stumps me is that (>>=) is supposed to return a container, but if
 we return (container2 st2) as in the code, then what we're really
 returning is the contents of the container!

Note the lambda abstraction (\st -> ...) at the beginning of the
definition. This means that (container >>= fn) returns a *function* that
maps an input state to the result of (container2 st2). It doesn't return
the result of (container2 st2) directly.
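
To make that concrete, here is a minimal sketch with the state-passing
type spelled out (hypothetical names; this mirrors the wikibook's
definition, not the real State monad):

type Container st a = st -> (a, st)

bind :: Container st a -> (a -> Container st b) -> Container st b
bind container fn =
    \st -> let (a, st2)   = container st
               container2 = fn a
           in  container2 st2

-- (tick `bind` ...) is itself a Container: nothing runs until it is
-- applied to an initial state
example :: (Int, Int)
example = (tick `bind` \n -> put (n + 100)) 0
  where tick st  = (st, st + 1)  -- yield the current state, increment it
        put x st = (x, st)       -- return x, leave the state alone
-- example == (100, 1)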

 So what would happen if we do this:
 
 nuclearWasteInContainer >>= processTheWaste >>= thoroughProcessTheWaste
 
 It seems to me that the second (>>=) in the above expression would
 have the arguments (nuclearWaste) and (nuclearWasteProcessor), when
 what it really expects are (Container nuclearWaste) and
 (nuclearWasteProcessor). So isn't something wrong with the definition
 of (>>=) above? Or am I missing something?
 
 (I know the article says that the type for their supposed State monad
 at that point is not actually correct, and will be clarified further
 on, but that seems to be irrelevant to my question.)
 
 TJ the forever noobie.