Re: [Haskell-cafe] Copy on read

2008-05-13 Thread Dan Piponi
On Sat, May 10, 2008 at 7:20 AM, Neil Mitchell <[EMAIL PROTECTED]> wrote:
>  Jurriaan Hage and Stefan Holdermans. Heap recycling for lazy
>  languages. In John Hatcliff, Robert Glück, and Oege de Moor, editors,
>  _Proceedings of the 2008 ACM SIGPLAN Symposium on Partial Evaluation
>  and Semantics-Based Program Manipulation_, PEPM'08, San Francisco,
>  California, USA, January 7--8, 2008, pages 189--197. ACM Press, 2008.
>  http://doi.acm.org/10.1145/1328408.1328436.

If I'm reading it aright, it works by tagging the types of values as
unique or not, but keeps that annotation secret from the programmer.
The typing rules are incredibly complicated, so to make things less
scary they are also hidden from the user. As a result, this idea
makes it impossible for a developer to reason about whether their code
will compile. That doesn't seem like a viable solution. Have I read
this correctly?

Cute idea though.
--
Dan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Jeff Polakow
Hello,

> I offer up the following example:
> 
This is an instructive example.

>   mean xs = sum xs / length xs
> 
In order to type-check, I actually need to write something like:

mean xs = sum xs / fromIntegral (length xs)

There are other ways to get the numeric types to match correctly, but this
is fairly general.

Then, I immediately blow my stack if I try something like: 

mean [1..1e9].

The culprit is actually sum, which is defined in the base libraries as
either a foldl or a direct recursion depending on a compiler flag. In
either case, the code is not strict enough; just trying to compute:

 sum [1..1e9]

blows the stack. This can be easily fixed by defining a suitable strict 
sum:

sum' = foldl' (+) 0

and now sum' runs in constant space. We could try to redefine mean using sum':

mean1 xs = sum' xs / fromIntegral (length xs)

but this still gobbles up memory. The reason is that xs is used twice and 
cannot be discarded as it is generated. So we must move to a direct fold, 
as you did, to get a space-efficient mean.

> If we now rearrange this to
> 
>   mean = (\(s,n) -> s / n) . foldr (\x (s,n) -> let s' = s+x; n' = n+1 
> in s' `seq` n' `seq` (s', n')) (0,0)
> 
> and run the same example, and watch it run in constant space.
> 
This code actually blows the stack on my machine just like the first naive
mean. A foldl is perhaps more intuitive to use here, since we are summing
the numbers as we encounter them while walking down the list, and there is
a strict version, foldl', provided in the base libraries.

mean2 = uncurry (/) . foldl' (\(s,n) x -> (s+x, n+1)) (0,0)

However, this still gobbles up memory... the reason is that pairs are
lazy. So we need a way to force the (s+x) and (n+1). An easy and
unobtrusive way to do this is to use strict pattern matching (bang
patterns, a GHC extension):

mean3 = uncurry (/) . foldl' (\(!s, !n) x -> (s+x, n+1)) (0,0)

Now we can run:

mean3 [1..1e9]

in constant space.
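For reference, here is a minimal self-contained module (an editorial sketch,
not from the original mail) assembling the pieces above; note the
BangPatterns extension and the Data.List import that the snippets assume:

{-# LANGUAGE BangPatterns #-}
module Main where

import Data.List (foldl')

-- strict sum: the accumulator is forced at every step,
-- so no chain of (+) thunks builds up
sum' :: [Double] -> Double
sum' = foldl' (+) 0

-- single-pass mean: sum and count accumulated together, with
-- bang patterns forcing both components at each step
mean3 :: [Double] -> Double
mean3 = uncurry (/) . foldl' step (0, 0)
  where step (!s, !n) x = (s + x, n + 1)

main :: IO ()
main = print (mean3 [1 .. 1e9])   -- constant space, though 1e9 takes a while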

While using an explicit foldl is less readable than the direct version 
(especially to a beginner), it is a standard functional idiom. 
Furthermore, a good understanding of lazy evaluation should be enough to 
guide you to using the strict foldl' and then strict patterns. Is
this a reasonable analysis?

Also, we've made no attempt to address speed. However, I would argue that
the code's running time is predictable: it grows linearly with the
size of the list. Improving the running time is another matter that
requires knowing about the internal representation of the various types
being used.

-Jeff



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: GHC predictability

2008-05-13 Thread Don Stewart
> I offer up the following example:
>
>  mean xs = sum xs / length xs
>
> Now try, say, "mean [1.. 1e9]", and watch GHC eat several GB of RAM. (!!)

But you know why, don't you?

>  I sat down and spent the best part of a day writing an MD5 
> implementation. Eventually I got it so that all the test vectors work 
> right. (Stupid little-endian nonsense... mutter mutter...) When I tried 
> it on a file containing more than 1 MB of data... o dear...

Did you use Data.Binary or the other libraries for efficient access to 
gigabytes of data?

> Of course, the first step in any serious attempt at performance improvement
> is to actually profile the code to figure out where the time 
> is being spent. 

Well, actually, for our 'mean()' case, it means just using the right structures.
Arrays for example:

{-# LANGUAGE TypeOperators #-}

import Data.Array.Vector
import Text.Printf

mean :: UArr Double -> Double
mean arr = b / fromIntegral a
  where
    k (n :*: s) a = n+1 :*: s+a
    a :*: b = foldlU k (0 :*: 0) arr :: (Int :*: Double)

main = printf "%f\n" . mean $ enumFromToFracU 1 1e9

For example,

$ time ./A
5.067109e8
./A  3.53s user 0.00s system 99% cpu 3.532 total

Try allocating an array of doubles in C, and getting similar results.
(The compiler is optimising this to:

Main_zdszdwfold_info:
  leaq32(%r12), %rax
  cmpq%r15, %rax
  movq%rax, %r12
  ja  .L10
  movsd   .LC0(%rip), %xmm0
  ucomisd %xmm5, %xmm0
  jae .L12
  movq%rsi, (%rax)
  movq$base_GHCziFloat_Dzh_con_info, -24(%rax)
  movsd   %xmm6, -16(%rax)
  movq$base_GHCziBase_Izh_con_info, -8(%rax)
  leaq-7(%rax), %rbx
  leaq-23(%rax), %rsi
  jmp *(%rbp)
.L12:
  movapd  %xmm6, %xmm0
  addq$1, %rsi
  subq$32, %r12
  addsd   %xmm5, %xmm0
  addsd   .LC2(%rip), %xmm5
  movapd  %xmm0, %xmm6
  jmp Main_zdszdwfold_info

Note there's not even any garbage collection.) This should be pretty much
the same performance-wise as unoptimised C.

> almost any nontrivial program you write 
> spends 60% or more of its time doing GC rather than actual work. 

Ok, you're doing something very wrong. GC time is typically less than 15%
of the running time of the typical programs I hack on.

I bet you're using lists inappropriately?

> I find it completely impossible to write code that doesn't crawl along at a
> snail's pace.  Even when I manage to make it faster, I usually have no clue
> why.

I think there is a problem that few people are taking the time to understand
the compilation model of Haskell, while they've had the C runtime behaviour
drilled into their brains since college.

Until you sit down and understand what your Haskell code means, it will be very
hard to reason about optimisations, unfortunately.

GHC does what it does well, and it's predictable, as long as you understand
the operations you're trying to make predictions about.

I suggest installing ghc-core, and looking at how your code is compiled.
Try many examples, and have a good knowledge of the STG paper.
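(An aside, not in the original mail: even without the ghc-core wrapper you
can see the optimised Core that GHC produces with

    $ ghc -O2 -ddump-simpl A.hs

which is essentially what ghc-core pretty-prints for you.)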

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Albert Y. C. Lai

Advanced technology ought to look like unpredictable magic.

My experience with lazy evaluation is such that every time a program is 
slower or bulkier than I presumed, it is not arbitrariness, it is 
something new to learn.


My experience with GHC is such that every surprise it gives me is a 
pleasant surprise: it produces a program faster or leaner than lazy 
evaluation would have it. "Where has the box gone?"

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Spencer Janssen
On Mon, May 12, 2008 at 08:01:53PM +0100, Andrew Coppin wrote:
> I offer up the following example:
>
>  mean xs = sum xs / length xs
>
> Now try, say, "mean [1.. 1e9]", and watch GHC eat several GB of RAM. (!!)

I don't see why the performance implications of this program are surprising.
Just ask any programmer used to a strict language how much memory "[1 .. 1e9]"
will require.

> If we now rearrange this to
>
>  mean = (\(s,n) -> s / n) . foldr (\x (s,n) -> let s' = s+x; n' = n+1 in 
> s' `seq` n' `seq` (s', n')) (0,0)
>
> and run the same example, and watch it run in constant space.

This will use linear stack space.  You probably meant to use foldl'?

Better:

mean = uncurry (/) . foldl' f (0, 0)
  where f (!s, !n) x = (s + x, n + 1)

        -- or, if you require Haskell '98:
        f (s, n) x = s `seq` n `seq` (s + x, n + 1)

This version is very legible in my opinion.  In fact, the algorithm is
identical to what I'd write in C.  Also, "mean [1 .. 1e9]" will actually work
in Haskell, while in C you'll just run out of memory.


Cheers,
Spencer Janssen
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] saner shootout programs

2008-05-13 Thread J C
On Mon, May 12, 2008 at 4:38 AM, Richard Kelsall
<[EMAIL PROTECTED]> wrote:

>  Hello JC, I think you've set yourself a challenge there :) Welcome to
>  Haskell programming. Taking a Shootout entry and playing with it is
>  a great way to learn Haskell. The Shootout provides an example in your
>  favourite previous language for comparison and a small well defined
>  program with exact test results you can pit your wits against. Fame
>  awaits you for a fast and beautiful entry. I'm still learning useful
>  things from the Fasta benchmark. It's surprising how many interesting
>  things you can discover in a small piece of code.

It may be fun, but I don't think it would be meaningful. My code will
be, most likely, slow, leaving some doubt as to whether it's slow
because of the limitations of the compiler or my inexperience.

On the other hand, if the experts can't help using malloc, unsafe*,
global mutables and IO, I'll be able to conclude that this is probably
what it takes to make Haskell run fast :-(
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Short circuiting and the Maybe monad

2008-05-13 Thread Abhay Parvate
Yes, I have always wished that the operator >>= were right-associative, so
that this shortcut would apply even when written without the 'do' notation.

On Tue, May 13, 2008 at 3:39 AM, John Hamilton <[EMAIL PROTECTED]> wrote:

> I'm trying to understand how short circuiting works with the Maybe monad.
> Take the expression n >>= f >>= g >>= h, which can be written as
> (((n >>= f) >>= g) >>= h) because >>= is left associative.  If n is
> Nothing, this implies that (n >>= f) is Nothing, and so on, each nested
> sub-expression easily evaluating to Nothing, but without there being a
> quick way to short circuit at the beginning.
>
> Now take the example
>
>   do x <- xs
>      y <- ys
>      z <- zs
>      return (x, y, z)
>
> which I believe desugars like
>
>    xs >>= (\x -> ys >>= (\y -> zs >>= (\z -> return (x, y, z))))
>
> Here the associativity of >>= no longer matters, and if xs is Nothing the
> whole expression can quickly be determined to be Nothing, because Nothing
> >>= _ = Nothing.  Am I looking at this correctly?
>
> - John
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Parsec3 performance issues (in relation to v2)

2008-05-13 Thread Neal Alexander
Just a heads up - I only have a month or so of experience with Haskell, so 
a lot of these issues may be my own fault.


Anyway, the log file that I'm parsing uses English grammar, and the 
performance really dropped just by upgrading to Parsec3. I was hoping to 
use the ByteString support to boost the speed of already slow code, but 
had no such luck. It basically went from "Ugh, this is kinda slow" to 
"Ugh, I'm gonna go grab a burger and let this melt my CPU" haha.


If anything, it's probably all the look-ahead the rules have to do to get 
the context-specific stuff right.


Some of the code is here: http://hpaste.org/7578

-
#1 . Parsec 2:

total time  =   46.44 secs   (2322 ticks @ 20 ms)
total alloc = 16,376,179,008 bytes  (excl. profiling overheads)

Parse taking 51.3% time and 65.3% alloc.
-
-
#2 . Parsec3
(4 times slower, no code changes):

total time  =  181.08 secs   (9054 ticks @ 20 ms)
total alloc = 46,002,859,656 bytes  (excl. profiling overheads)

Text.Parsec.Prim taking 84.7% time and 86.0% alloc.
-
-
#3 . Parsec3 but with the whole project converted to ByteString:
(8 times slower):

total time  =  378.22 secs   (18911 ticks @ 20 ms)
total alloc = 100,051,417,672 bytes  (excl. overheads)
-

The third run probably isn't a great indicator, since I reverted some 
rule-set optimizations that were causing errors. Plus I ended up packing 
the Parsec String results to ByteStrings to fit in with everything else.



I can post the full profiling info if anyone really cares.
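(Editorial note: the figures above look like standard GHC time/allocation
profiles. Assuming the profiling libraries are installed, the usual recipe
for reproducing numbers like these is:

    $ ghc -O2 -prof -auto-all --make Main.hs
    $ ./Main +RTS -p -RTS
    $ less Main.prof    -- total time, total alloc, per-cost-centre breakdown

-prof -auto-all puts a cost centre on every top-level binding (newer GHCs
spell it -fprof-auto); +RTS -p writes the .prof report.)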

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Don Stewart
gale:
> Andrew Coppin wrote:
> >  I offer up the following example:
> >
> >   mean xs = sum xs / length xs
> >
> >  Now try, say, "mean [1.. 1e9]", and watch GHC eat several GB of RAM. (!!)
> >
> >  If we now rearrange this to
> >
> >   mean = (\(s,n) -> s / n) . foldr (\x (s,n) -> let s' = s+x; n' = n+1 in s'
> > `seq` n' `seq` (s', n')) (0,0)
> >
> >  and run the same example, and watch it run in constant space.
> >
> >  Of course, the first version is clearly readable, while the second one is
> > almost utterly incomprehensible, especially to a beginner. (It's even more
> > fun that you need all those seq calls in there to make it work properly.)
> 
> You can write it like this:
> 
> mean = uncurry (/) . foldl' (\(s,n) x -> ((,) $! s+x) $! n+1) (0,0)
> 
> I don't think that's so bad. And for real-life examples, you almost
> never need the ($!)'s or seq's - your function will do some kind
> of pattern matching that will force the arguments. So really, all
> you need to remember is: if you're repeating a fast calculation across
> a big list, use foldl'. And insertWith', if you're storing the result in
> a Data.Map. That's about it.
> 
> >  The sad fact is that if you just write something in Haskell in a nice,
> > declarative style, then roughly 20% of the time you get good performance,
> > and 80% of the time you get laughably poor performance.
> 
> I don't know why you think that. I've written a wide variety of functions
> over the past few years. I find that when performance isn't good enough,
> it's because of the algorithm, not because of laziness. Laziness
> works for me, not against me.
> 
> Of course, it depends what you mean by "good performance". I have
> never needed shootout-like performance. But to get that, you need
> some messy optimization in any language.

We can actually get great performance here,

{-# LANGUAGE TypeOperators #-}

import Data.Array.Vector
import Text.Printf

mean :: UArr Double -> Double
mean arr = b / fromIntegral a
  where
    k (n :*: s) a = n+1 :*: s+a
    a :*: b = foldlU k (0 :*: 0) arr :: (Int :*: Double)

main = printf "%f\n" . mean $ enumFromToFracU 1 1e9

ghc -O2

$ time ./A
5.067109e8
./A  3.69s user 0.00s system 99% cpu 3.692 total

Versus on lists:

import Data.List
import Text.Printf
import Data.Array.Vector

mean :: [Double] -> Double
mean arr = b / fromIntegral a
  where
    k (n :*: s) a = (n+1 :*: s+a)
    (a :*: b) = foldl' k (0 :*: 0) arr :: (Int :*: Double)

main = printf "%f\n" . mean $ [1 .. 1e9]

$ time ./A 
5.067109e8
./A  66.08s user 1.53s system 99% cpu 1:07.61 total

Note the use of strict pairs. Key to ensuring the accumulators end up in
registers. The performance difference here is due to foldl (and all left
folds) not fusing in normal build/foldr fusion.

The vector version runs about the same speed as unoptimised C.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Copy on read

2008-05-13 Thread Matthew Naylor
Hi Andrew,

my probably dodgy reason for mentioning deforestation is that sharing
of intermediate values is a major stumbling block; code that uses data
linearly is possibly well suited for deforesting.  See Frankau's SASL
for a language that deforests all lists simply by not letting you copy
them!  (IIRC there is another constraint that forbids accumulating
parameters too.)

> Similarly, there are recursion patterns for which fusion isn't very 
> easy.

Yep, I suspect you're right.

> That's why most array-based code is explicitly in-place.  wouldn't
> it be nice if it didn't have to be?

I agree.  As an aside, DiffArray looks quite nice:

  http://www.haskell.org/haskellwiki/Modern_array_libraries

``if a diff array is used in a single-threaded style, ..., a!i takes
O(1) time and a//d takes O(length d).''

Notice the use of "seq" in the 2nd example to enforce a kind of
single-threaded behaviour. Seems nasty! I wonder if this could be
avoided by providing a (*!*) such that

  arr *!* i = seq x (arr, x)
    where x = arr ! i

It returns a new array which the programmer should use if they want
single-threaded behaviour.
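For instance (an illustrative sketch building on the (*!*) above, assuming
an import of Data.Array.Diff), a caller would rethread the array through
every read to stay single-threaded:

  -- read two elements, always using the array returned by the last read
  sumTwo :: DiffArray Int Double -> (DiffArray Int Double, Double)
  sumTwo a0 = let (a1, x) = a0 *!* 0
                  (a2, y) = a1 *!* 1
              in  (a2, x + y)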

Matt.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Abhay Parvate
I don't know why, but perhaps beginners may expect too much from the
laziness, almost to the level of magic (me too, in the beginning!). In an
eager language, a function like

mean :: (Fractional a) => [a] -> a

expects the *whole* list before it can calculate the mean, and the question
of the 'mean' function consuming memory does not arise. We look for other
methods of finding the mean of very long lists. We do not expect such a
function in C or Scheme to succeed when the number of numbers is more than
that can fit the memory. (It will not even be called; the list creation
itself will not succeed.) Lazy languages let us keep the same
abstraction while doing more. But it is not magic, it is plain
normal order evaluation. Just as every Scheme programmer or C programmer
must understand the consequences of the fact that the arguments to a
function will be evaluated first, a Haskell programmer must understand the
consequences of the fact that the arguments to a function will be evaluated
only when needed/forced. Perhaps an early emphasis on an understanding of
normal order evaluation is needed while learning Haskell in order to stop
expecting magic, especially when one comes prejudiced from eager languages.
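(A one-line illustration, added editorially: in Haskell

  take 5 [1 ..]   -- [1,2,3,4,5]; the rest of the infinite list is never built

while under eager evaluation the argument [1 ..] would have to be evaluated
in full, forever, before take could even be called.)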

Regards
Abhay
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] saner shootout programs

2008-05-13 Thread Brandon S. Allbery KF8NH


On 2008 May 13, at 0:26, J C wrote:


On the other hand, if the experts can't help using malloc, unsafe*,
global mutables and IO, I'll be able to conclude that this is probably
what it takes to make Haskell run fast :-(


Very few of the shootout entries have been revisited since most of the  
improvements to list and stream fusion, etc. in GHC, if I can trust  
the amount of discussion of shootout entries I've seen on IRC.  Some  
of them are still vintage 6.4.2, which had very little in the way of  
compiler smarts so hand optimization was crucial.  6.8.2, on the other  
hand, does much better all by itself as long as you don't e.g. use  
lists in stupid ways.


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Brandon S. Allbery KF8NH


On 2008 May 12, at 22:18, Jeff Polakow wrote:


Then, I immediately blow my stack if I try something like:

mean [1..1e9].

The culprit is actually sum which is defined in the base libraries  
as either a foldl or a direct recursion depending on a compiler  
flag. In either case, the code is not strict enough; just trying to  
compute:


 sum [1..1e9]


There's also an insufficient-laziness issue with enumerations in at
least some versions of the standard library, IIRC, meaning that just
saying [1..1e9] can introduce a space leak that can lead to a
stack blowout.


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] saner shootout programs

2008-05-13 Thread Richard Kelsall

J C wrote:

<[EMAIL PROTECTED]> wrote:


 Hello JC, I think you've set yourself a challenge there :) Welcome to
 Haskell programming. Taking a Shootout entry and playing with it is
 a great way to learn Haskell. The Shootout provides an example in your
 favourite previous language for comparison and a small well defined
 program with exact test results you can pit your wits against. Fame
 awaits you for a fast and beautiful entry. I'm still learning useful
 things from the Fasta benchmark. It's surprising how many interesting
 things you can discover in a small piece of code.


It may be fun, but I don't think it would be meaningful. My code will
be, most likely, slow, leaving some doubt as to whether it's slow
because of the limitations of the compiler or my inexperience.

On the other hand, if the experts can't help using malloc, unsafe*,
global mutables and IO, I'll be able to conclude that this is probably
what it takes to make Haskell run fast :-(


Don't tell the experts who wrote the current shootout entries, but the
playing field is tilted radically in favour of us beginners being able
to improve on their entries because of new versions of GHC and new
tools that have been developed since they wrote their entries.

GHC will now automatically perform many of the optimisations that used
to have to be done by hand. For example I was surprised to discover
the other day when working on Fasta that putting this plain and simple
version of splitAt

splitAt n (x : xs) = (x : xs', xs'')
  where (xs', xs'') = splitAt (n-1) xs

in my program made it run more quickly than using the built-in version
of splitAt, which I now know looks like (ugh!) this

splitAt (I# n#) ls
  | n# <# 0#    = ([], ls)
  | otherwise   = splitAt# n# ls
  where
    splitAt# :: Int# -> [a] -> ([a], [a])
    splitAt# 0# xs     = ([], xs)
    splitAt# _  xs@[]  = (xs, xs)
    splitAt# m# (x:xs) = (x:xs', xs'')
      where
        (xs', xs'') = splitAt# (m# -# 1#) xs

because I was compiling my splitAt with -O2 optimisation as opposed
to the built-in version being compiled with -O. The extra optimisations
in -O2 are a new feature of GHC (and -O2 is slower to compile which is
why the built-in version doesn't use it, but that doesn't matter for the
shootout). You may similarly find various elaborations in the shootout
entries that were historically necessary for speed or memory reasons,
but which can now be removed because GHC or new libraries do the work
for us.

Have a go and see what happens to the speed when you change things
to make N-body more readable. I would bet money on there being simple
tweaks which will make it simultaneously faster and more readable.


Richard.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] saner shootout programs

2008-05-13 Thread Sterling Clover
Well, it would be meaningful for your own experience in learning Haskell.
Some time ago somebody took a shot at nbodies using pure immutable
structures, and as I recall, got within about 4x the performance of mutable
structures. In 6.6, an STArray based approach was maybe 2x slower, but by
now it may well be comparable, as Dons pointed out. In fact, lots of the
shootout entries could use an overhaul now that 6.8 is running on their
machines, as plenty of older, uglier, optimizations are probably preformed
as efficiently if not more so by the newer GHC, whose strictness analysis,
for example, is consistently and pleasantly surprising.

If you take a stab at some of these benchmarks, with some helpful guidance
from #haskell, you'll no doubt get a much better sense of how memory and
performance work in Haskell, and which tweaks are just there for the last
2%. So even if you can't beat the current winners (and given the compiler
changes, you may well be able to) you'll still come out ahead in the
process.

As for the Clean entry though, it no doubt relies heavily on uniqueness
typing and so is secretly mutating like crazy under the hood.

Cheers,
S.

On Tue, May 13, 2008 at 12:26 AM, J C <[EMAIL PROTECTED]> wrote:

> On Mon, May 12, 2008 at 4:38 AM, Richard Kelsall
> <[EMAIL PROTECTED]> wrote:
>
> >  Hello JC, I think you've set yourself a challenge there :) Welcome to
> >  Haskell programming. Taking a Shootout entry and playing with it is
> >  a great way to learn Haskell. The Shootout provides an example in your
> >  favourite previous language for comparison and a small well defined
> >  program with exact test results you can pit your wits against. Fame
> >  awaits you for a fast and beautiful entry. I'm still learning useful
> >  things from the Fasta benchmark. It's surprising how many interesting
> >  things you can discover in a small piece of code.
>
> It may be fun, but I don't think it would be meaningful. My code will
> be, most likely, slow, leaving some doubt as to whether it's slow
> because of the limitations of the compiler or my inexperience.
>
> On the other hand, if the experts can't help using malloc, unsafe*,
> global mutables and IO, I'll be able to conclude that this is probably
> what it takes to make Haskell run fast :-(
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Darrin Thompson
On Tue, May 13, 2008 at 2:20 AM, Don Stewart <[EMAIL PROTECTED]> wrote:
>  Note the use of strict pairs. Key to ensuring  the accumulators end up in
>  registers.The performance difference here is due to fold (and all left
>  folds) not fusing in normal build/foldr fusion.
>
>  The vector version runs about the same speed as unoptimsed C.
>

These "tricks" going into Real World Haskell? When you say someone
needs to get familiar with the "STG paper" it scares me (a beginner)
off a little, and I've been making an effort to approach the papers. I
could barely understand the Fusion one and getting familiar with
compiler internals sounds like something I'd not be ready for.
Probably if I really looked at ghc-core I'd be pleasantly surprised
but I'm totally biased against even looking. Gcc is hard to read, thus
ghc is also. So while you are right about all this when you say it, I
think your goal is to persuade. RWH has some of the best practical
prose I've read yet. Well done there. Hopefully chapter 26 will be
crammed full of this stuff?

--
Darrin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Short circuiting and the Maybe monad

2008-05-13 Thread Graham Fawcett
On Mon, May 12, 2008 at 6:09 PM, John Hamilton <[EMAIL PROTECTED]> wrote:
> I'm trying to understand how short circuiting works with the Maybe monad.
>  Take the expression n >>= f >>= g >>= h, which can be written as
>  (((n >>= f) >>= g) >>= h) because >>= is left associative.  If n is
>  Nothing, this implies that (n >>= f) is Nothing, and so on, each nested
>  sub-expression easily evaluating to Nothing, but without there being a
>  quick way to short circuit at the beginning.

Yes, but that's still a 'quick' short-circuiting. In your example, if
'n' is Nothing, then the 'f >>= g >>= h' thunks will not be forced
(thanks to lazy evaluation), regardless of associativity. Tracing
verifies this:

  > import Debug.Trace

  > talk s = Just . (trace s)
  > f = talk "--f"
  > g = talk "--g"
  > h = talk "--h"

  > foo n = n >>= f >>= g >>= h

  *Main> foo (Just 3)
  Just --h
  --g
  --f
  3
  *Main> foo Nothing
  Nothing

I suspect the cost of creating and discarding those unused thunks is
negligible, so in effect the associativity of the bind operator is
irrelevant here.
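(For reference, this behaviour falls straight out of the standard Maybe
instance in the Prelude:

   instance Monad Maybe where
       return        = Just
       Nothing >>= _ = Nothing   -- the right-hand side is never inspected
       Just x  >>= f = f x

whichever way the binds associate, a Nothing on the left discards the rest
of the chain without forcing it.)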

Graham
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] Re: ANN: Haddock version 2.1.0

2008-05-13 Thread Thomas Schilling
Feel free to CC me or the ticket with things like that.  I'll be
working on this for this year's GSoC and it'd be helpful to find out
what I should tackle first.

On Fri, May 9, 2008 at 8:30 PM, Claus Reinke <[EMAIL PROTECTED]> wrote:
>
>
> > Ah, I didn't think about the GHC options that change the lexical
> > syntax. You're right, using the GHC lexer should be easier.
> >
>
>  and, if you do that, you could also make the GHC lexer
>  squirrel away the comments (including pragmas, if they aren't
>  already in the AST) someplace safe, indexed by, or at least annotated with,
> their source locations, and make this comment/
>  pragma storage available via the GHC API. (1a)
>
>  then, we'd need a way to merge those comments and pragmas
>  back into the output during pretty printing, and we'd have made
>  the first small step towards source-to-source transformations: making code
> survive semantically intact over (pretty . parse). (1b)
>
>  that would still not quite fulfil the GHC API comment ticket (*),
>  but that was only a quick sketch, not a definite design. it might be
> sufficient to let each GHC API client do its own search to associate bits of
> comment/pragma storage with bits of AST.
>  if i understand you correctly, you are going to do (1a), so
>  if you could add that to the GHC API, we'd only need (1b)
>  to go from useable-for-analysis-and-extraction to
> useable-for-transformation.
>
>  is that going to be a problem?
>
>  claus
>
>  (*) knowing the source location of some piece of AST is not
>  sufficient for figuring out whether it has any immediately
>  preceding or following comments (there might be other AST
>  fragments in between, closer to the next comment).
>  but, if one knows the nearest comment segment for each piece of AST, one
> could then build a map where the closest
>  AST pieces are mapped to (Just commentID), and the other
>  AST pieces are mapped to Nothing.
>
>  ___
>  Haskell-Cafe mailing list
>  Haskell-Cafe@haskell.org
>  http://www.haskell.org/mailman/listinfo/haskell-cafe
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re[2]: [Haskell-cafe] saner shootout programs

2008-05-13 Thread Bulat Ziganshin
Hello Richard,

Tuesday, May 13, 2008, 6:10:54 PM, you wrote:

> because I was compiling my splitAt with -O2 optimisation as opposed
> to the built-in version being compiled with -O. The extra optimisations
> in -O2 are a new feature of GHC (and -O2 is slower to compile which is
> why the built-in version doesn't use it, but that doesn't matter for the
> shootout).

-O2 is a very old GHC feature, and I think the GHC base library is
compiled with -O2 - it's too obvious an idea to have been missed

-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] saner shootout programs

2008-05-13 Thread Richard Kelsall

Bulat Ziganshin wrote:

Hello Richard,

Tuesday, May 13, 2008, 6:10:54 PM, you wrote:


because I was compiling my splitAt with -O2 optimisation as opposed
to the built-in version being compiled with -O. The extra optimisations
in -O2 are a new feature of GHC (and -O2 is slower to compile which is
why the built-in version doesn't use it, but that doesn't matter for the
shootout).


-O2 is very old ghc feature and i think that ghc base library is
compiled with -O2 - it's too obvious idea



In July 2007 -O2 was documented in GHC as making no difference to
the speed of programs:

http://www.haskell.org/pipermail/haskell-cafe/2007-July/029118.html

and from this thread

http://www.haskell.org/pipermail/haskell-cafe/2008-April/042155.html

it appears to be currently unused for splitAt.

I guess -O2 has however been around for a long time.


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re[2]: [Haskell-cafe] saner shootout programs

2008-05-13 Thread Bulat Ziganshin
Hello Richard,

Tuesday, May 13, 2008, 7:56:36 PM, you wrote:

> In July 2007 -O2 was documented in GHC as making no difference to
> the speed of programs :

> http://www.haskell.org/pipermail/haskell-cafe/2007-July/029118.html

it's because GHC is 15 years old and its documentation may not be
updated as things change :)

> and from this thread

> http://www.haskell.org/pipermail/haskell-cafe/2008-April/042155.html

> it appears to be currently unused for splitAt.

I've read this thread. It was just an assumption - don't believe it
until you have checked it.


-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: GHC predictability

2008-05-13 Thread Achim Schneider
"Darrin Thompson" <[EMAIL PROTECTED]> wrote:

> On Tue, May 13, 2008 at 2:20 AM, Don Stewart <[EMAIL PROTECTED]> wrote:
> >  Note the use of strict pairs. Key to ensuring  the accumulators
> > end up in registers.The performance difference here is due to
> > fold (and all left folds) not fusing in normal build/foldr fusion.
> >
> >  The vector version runs about the same speed as unoptimsed C.
> >
> 
> These "tricks" going into Real World Haskell? When you say someone
> needs to get familiar with the "STG paper" it scares me (a beginner)
> off a little, an I've been making an effort to approach the papers. I
> could barely understand the Fusion one and getting familiar with
> compiler internals sounds like something I'd not be ready for.
> Probably if I really looked at ghc-core I'd be pleasantly surprised
> but I'm totally biased against even looking. Gcc is hard to read, thus
> ghc is also. So while you are right about all this when you say it, I
> think your goal is to persuade. RWH has some of the best practical
> prose I've read yet. Well done there. Hopefully chapter 26 will be
> crammed full of this stuff?
> 
You know, sometimes I wish this were the Eve forums, so that I
could just answer "FAIL".

Anyway, the goal of the Haskell community is to prevent success at any
cost, so anything that is done to ease things for noobs that is not
purely meant to prevent anyone from asking questions will be warded off
by automatic defence systems of the big ivory tower, which are
reinforced faster than you can ever hope to understand any topic.


To get a bit more on-topic: I currently completely fail to implement a
layout rule in Parsec because I don't understand its inner workings,
and thus constantly mess up my state. Parsec's ease of use is
deceiving as soon as you use more than combinators: suddenly the
plumbing becomes important, and Hackage is full of such things. Haskell
has potentially infinite learning curves, and each one of them
usually represents a wall. To make them crumble, you have to get used to
not understanding anything until you understand everything.

-- 
(c) this sig last receiving data processing entity. Inspect headers for
past copyright information. All rights reserved. Unauthorised copying,
hiring, renting, public performance and/or broadcasting of this
signature prohibited. 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] saner shootout programs

2008-05-13 Thread Richard Kelsall

Bulat Ziganshin wrote:

Hello Richard,


In July 2007 -O2 was documented in GHC as making no difference to
the speed of programs :



http://www.haskell.org/pipermail/haskell-cafe/2007-July/029118.html


it's because ghc is 15 years old and its documentation may be not
updated as things changes :)


and from this thread



http://www.haskell.org/pipermail/haskell-cafe/2008-April/042155.html



it appears to be currently unused for splitAt.


i've read this thread. it was just assumption - don't believe it
before you have checked it



Hello Bulat,

Yes it was just a plausible guess, but not contradicted by the experts.
And that was using the Windows version of GHC so other versions may
have better optimisation. I don't know how to check, maybe the experts
can illuminate the subject?


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Analog data acquisition

2008-05-13 Thread Tom Nielsen
Hello,

I would like to use a lazy, purely functional language to create an
experiment description (and execution!) language for cellular
neuroscience, i.e. electrical recordings and stimulation.
Unfortunately, this means I have to talk to a
Analog-to-Digital/Digital-to-Analog converter board, with a precision
of about 16 bit at a rate of 50 kHz. I currently have a National
Instruments M-series board, but would be happy to try another board.
I'm not looking for real-time control right now, but that might be of
interest in the future.

Has anyone looked at creating bindings to the NI-DAQmx drivers or the
open-source COMEDI project, or similar hardware? Would anyone be
interested in helping out with this driver binding?

Regards
Tom
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Instance of Eq on cyclic data structure?

2008-05-13 Thread Leonard Siebeneicher

Hi.

I am testing a winged edge data structure.


module Main where

type Point3 = (Float, Float, Float)

data Body   = Body [Vertex] [Edge] [Face] deriving (Eq)
data Face   = Face Edge deriving (Eq)
data Edge   = Edge (Vertex, Vertex)
                   (Face, Face)
                   (Edge, Edge)
                   (Edge, Edge) deriving (Eq)
data Vertex = Vertex Point3 Edge deriving (Eq)

...

--- Face, Edge, Vertex: Conversion

getVertsFromFace :: Face -> [Vertex]
getVertsFromFace (Face startEdge) = process_wing_clockwise startEdge
  where
    process_wing_clockwise (Edge (vp, _) _ (_, eWing1) _)
      | startEdge == eWing1 = [vp]
      | otherwise           = vp : process_wing_clockwise eWing1


--- Misc procedures
printPointList :: [Vertex] -> IO ()
printPointList [] = return ()
printPointList ((Vertex p _ ):vs) = putStr (show p) >>
                                    printPointList vs

--- main
main = do
  let q = createQuad (0,0,0) (0,1,0) (1,1,0) (1,0,0)
  let f = (\(Body _ _ (face:faces)) -> face) q
  let verts = getVertsFromFace f
  printPointList verts



Here I get a error message.


(0.0,0.0,0.0)(0.0,1.0,0.0)(1.0,1.0,0.0)runhugs: Error occurred

ERROR - Control stack overflow

I think this comes about because the derived Eq instance "==" gets into
an infinite loop.


Alternatively, I tried the following:

--- We want to identify each Body, Face, Edge or Vertex by a number ID
type ID = Int  --- A special number for identifying, used for Eq

data Body = Body ID [Vertex] [Edge] [Face]
data Face = Face ID (Edge)
data Edge = Edge ID
(Vertex , Vertex)
(Face , Face)
(Edge , Edge)
(Edge , Edge)
data Vertex = Vertex ID Edge

--- Eq Instances
instance Eq Vertex where
(Vertex i1 _) == (Vertex i2 _) = i1 == i2

instance Eq Face where
(Face i1 _ ) == (Face i2 _) = i1 == i2

instance Eq Edge where
(Edge i1 _ _ _ _) == (Edge i2 _ _ _ _) = i1 == i2


instance Eq Body where
(Body i1  _ _ _) == (Body i2  _ _ _) = i1 == i2

...

This way my code does not hang up. But I have to manage those ugly IDs.
Is there a better way to create instances of class Eq, without something
like an ID?


Thanks for reading.

Best regards, Leonard







-------- Begin --------

module Main where

type Point3 = (Float, Float, Float)

data Body   = Body [Vertex] [Edge] [Face] deriving (Eq)
data Face   = Face Edge deriving (Eq)
data Edge   = Edge (Vertex, Vertex)
                   (Face, Face)
                   (Edge, Edge)
                   (Edge, Edge) deriving (Eq)
data Vertex = Vertex Point3 Edge deriving (Eq)

{-
  implementing simple generative modelling
-}

--- elementar object generation
createQuad :: Point3 ->
  Point3 ->
  Point3 ->
  Point3 ->
  Body

createQuad p0 p1 p2 p3 =
let
{faceFront = Face edge0
;faceBack = Face edge2
;vert0 = Vertex p0 edge0
;edge0 = Edge
 (vert0, vert1)
 (faceFront, faceBack)
 (edge3, edge1)
 (edge1, edge3)
;vert1 = Vertex p1 edge1
;edge1 = Edge
 (vert1, vert2)
 (faceFront, faceBack)
 (edge0, edge2)
 (edge2, edge0)
;vert2 = Vertex p2 edge2
;edge2 = Edge
 (vert2, vert3)
 (faceFront, faceBack)
 (edge1, edge3)
 (edge3, edge1)
;vert3 = Vertex p3 edge3
;edge3 = Edge
 (vert3, vert0)
 (faceFront, faceBack)
 (edge2, edge0)
 (edge0, edge2)
}
in
  Body [vert0, vert1, vert2, vert3] [edge0, edge1, edge2, edge3] 
[faceFront, faceBack]


--- Face, Edge, Vertex: Conversion

getVertsFromFace :: Face -> [Vertex]
getVertsFromFace (Face startEdge) = process_wing_clockwise startEdge
  where
    process_wing_clockwise (Edge (vp, _) _ (_, eWing1) _)
      | startEdge == eWing1 = [vp]
      | otherwise           = vp : process_wing_clockwise eWing1


--- Misc procedures
printPointList :: [Vertex] -> IO ()
printPointList [] = return ()
printPointList ((Vertex p _ ):vs) = putStr (show p) >>
                                    printPointList vs

--- main
main = do
  let q = createQuad (0,0,0) (0,1,0) (1,1,0) (1,0,0)
  let f = (\(Body _ _ (face:faces)) -> face) q
  let verts = getVertsFromFace f
  printPointList verts
-------- End --------

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Data.Dynamic over the wire

2008-05-13 Thread Jules Bean

> {-# LANGUAGE ScopedTypeVariables #-}

Data.Dynamic gives a passable impression of adding support for
dynamically typed code and runtime typing to GHC, without changing
the basic statically typed, all-types-known-at-compile-time nature of the
language.

Note that Data.Dynamic relies upon two things: it relies upon a
concrete representation of types, given by TypeRep, and a primitive
which has to be provided by the compiler to actually implement
fromDynamic. (In GHC it uses unsafeCoerce# which is already
available, but you could imagine providing other primitives).

In principle TypeReps could be derived by hand, although if you do so
you can break everything by providing invalid instances. In practice
we'd rather the compiler did it for us and guaranteed safety.

You can do all sorts of things with Dynamic, but the general pattern
is that data which has some fixed, known type, can be passed through
a chunk of code which doesn't know its type (wrapped in Dynamic) and
then eventually consumed by another piece of code which *does* know
the type, and can unwrap it. The consuming code has to know the type
to unwrap it, although it can 'guess' various alternatives if it
wants, and thus type safety is preserved.

One thing which you can't obviously do is write Read or Show instances
for Dynamic. So can we pass Dynamic data over the wire?  If not,
Dynamic is limited to the context of "within a single program", and
can't be used over the network between cooperating programs, or in
file formats, etc.

You can try this:

> import Data.Typeable

> data SerialisedDynamic = SD TypeRep String deriving (Show)

> freeze :: (Show a, Typeable a) => a -> SerialisedDynamic
> freeze x = SD (typeOf x) (show x)

> thaw :: forall a . (Read a, Typeable a) => SerialisedDynamic -> Maybe a
> thaw (SD t s) = if typeOf (undefined :: a) == t then
>Just (read s)
> else Nothing
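
A quick round trip (an editorial addition, not in the original message)
shows the intended behaviour:

> test1 :: Maybe Int
> test1 = thaw (freeze (42 :: Int))    -- Just 42: the TypeReps match

> test2 :: Maybe Bool
> test2 = thaw (freeze (42 :: Int))    -- Nothing: Bool's TypeRep differs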

This is close, and works as far as it goes. It is a limited
reimplementation of Dynamic which uses show/read instead of
unsafeCoerce#. As such it is pure haskell (but relies on Typeable
instances).

You can't finish it off because you can't derive a 'Read' instance for
SD, because there is no read instance for TypeRep. Off-hand I can't
think of any reason why there can't be a Read instance for TypeRep,
but it would be a bit tricky with the current TypeRep because of the
way its implemented, I think. You need to take care about globally
qualified types and might want to use package names like ghc does in
its linking phase, but those are definitely surmountable problems.

Having said all that, I'm not sure how useful this really is. Most of
the time you could use this, you could equally just pass around the
String and 'read' it once you get to the place where you want to use
the value. Practical over-the-wire protocols necessarily have some
kind of tagging mechanism, and all this adds is a global "tag table"
for Typeable types via TypeRep.

Jules
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] hsc2hs: expanding macros in the .hsc file

2008-05-13 Thread Olivier Boudry
Hi all,

Is it possible to expand macros defined in includes into the .hsc file? I'm
trying to call functions from a library written in C. The library can be
used with or without Unicode chars, depending on #define instructions.

The library has macros for all the standard functions used to work on
strings (malloc, strlen, strcpy, strtok, ...). So that in code using the
library one can always call the macro and don't care about the char type
(strlenU instead of wcslen or strlen). Macro expansion will define which
function will be used depending on the SAPwithUNICODE #define. I put a very
simplified example below. The original file contains much more conditions
(OS, Unicode or not, little or big endian, ...).

#define strlenU SAP_UNAME(strlen)

#ifdef SAPwithUNICODE
  #define SAP_UNAME(name)name##U16
#else
  #define SAP_UNAME(name)name
#endif

#if defined(WCHAR_is_2B)
  #define strlenU16   wcslen
#endif

I would like to be able to expand strlenU in my .hsc file and get the
correct function.

foreign import ccall "static sapuc.h strlenU"
  f_strlenU :: Ptr (#type SAP_UC) -> IO (#type size_t)

I would like to be able to expand the strlenU macro to the real function
name (wcslen or strlen). Is there a way to do this?

Thanks,

Olivier.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Data.Dynamic over the wire

2008-05-13 Thread Bulat Ziganshin
Hello Jules,

Tuesday, May 13, 2008, 9:39:12 PM, you wrote:
> This is close, and works as far as it goes. It is a limited
> reimplementation of Dynamic which uses show/read instead of

there are gread/gshow funcs; I don't know how these work, though :)


-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Instance of Eq on cyclic data structure?

2008-05-13 Thread Lennart Augustsson
Using IDs is the only way.  Haskell has no concept of circular data
structures.  An implementation might use them internally, but there is no
such language concept.  So what you have are infinitely unfolding trees and
they cannot be compared for equality.

  -- Lennart

On Tue, May 13, 2008 at 5:29 PM, Leonard Siebeneicher <[EMAIL PROTECTED]>
wrote:

> Hi.
>
> I am testing a winged edge data structure.
>
> [snip - the full message is quoted earlier in this digest]

Re[2]: [Haskell-cafe] saner shootout programs

2008-05-13 Thread Bulat Ziganshin
Hello Richard,

Tuesday, May 13, 2008, 8:23:59 PM, you wrote:

> Yes it was just a plausible guess, but not contradicted by the experts.
> And that was using the Windows version of GHC so other versions may
> have better optimisation. I don't know how to check, maybe the experts
> can illuminate the subject?

note that my letter was not contradicted either, LOL

1. Many experts just don't read this list due to its high traffic;
write to ghc-users instead.

2. GHC is too large a system for anyone to know all its details. This
particular option may have been set up 10 years ago and never
touched/studied by anyone since then. For example, I'm a GHC performance
expert [to some degree], but I don't know anything about its build
system. All I can tell is that GHC base library build times don't
prohibit use of -O2.


-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Trying to avoid duplicate instances

2008-05-13 Thread Bulat Ziganshin
Hello Eric,

Tuesday, May 13, 2008, 10:16:48 PM, you wrote:

> -fallow-overlapping-instances doesn't convince GHC.  Is there a way
> around this other than manually writing out all the instances I want?

afaik, no. Type classes are not the same as OOP classes. I suggest you
look into http://haskell.org/haskellwiki/OOP_vs_type_classes and in
particular the papers mentioned at the end - one describes the
implementation of type classes and the other emphasizes the differences
between type classes and OOP classes. These should shed some light :)


-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Trying to avoid duplicate instances

2008-05-13 Thread Eric Stansifer
I am using a bunch of empty type classes to categorize some objects:

> class FiniteSolidObject o
> class FinitePatchObject o
> class InfiniteSolidObject o
...
> instance FiniteSolidObject Box
> instance FiniteSolidObject Sphere
> instance FinitePatchObject Mesh
> instance InfiniteSolidObject Plane
...

and would like to write a function elsewhere with a type signature like:
> intersection :: (SolidObject o1, SolidObject o2) => o1 -> o2 -> Intersection 
> o1 o2

These classes are not exposed to the user (which will be me, but
that's besides the point) so the the user (me) does not accidentally
try to intersect two meshes, etc.  So long as I define my instances
correctly the compiler will verify this at compile time.

Since "solid objects" are exactly "finite solid objects" plus
"infinite solid objects", there is an obvious way to code up this
logical relationship.  So I try to write:
> class SolidObject o
> instance FiniteSolidObject o => SolidObject o
> instance InfiniteSolidObject o => SolidObject o

but inevitably GHC complains that I have duplicate instance
declarations.  I had mistakenly thought that GHC would only give such
an error if it could exhibit a specific type 'o' which satisfied both
the context 'FiniteSolidObject o' and the context 'InfiniteSolidObject
o' (and I know that there does not exist any such 'o' because I define
my instances such that 'FiniteSolidObject' and 'InfiniteSolidObject'
will never coincide, and I do not export any of these type classes),
but I see now that this is not the case.  Adding flags like
-fallow-overlapping-instances doesn't convince GHC.  Is there a way
around this other than manually writing out all the instances I want?
That is,

> instance FiniteSolidObject Box
> instance FiniteObject Box
> instance SolidObject Box
> instance UnionableObject Box
> instance Object Box
>
> instance FiniteSolidObject Sphere
...

Thanks,
Eric
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Analog data acquisition

2008-05-13 Thread Don Stewart
tanielsen:
> Hello,
> 
> I would like to use a lazy, purely functional language to create an
> experiment description (and execution!) language for cellular
> neuroscience, i.e. electrical recordings and stimulation.
> Unfortunately, this means I have to talk to a
> Analog-to-Digital/Digital-to-Analog converter board, with a precision
> of about 16 bit at a rate of 50 kHz. I currently have a National
> Instruments M-series board, but would be happy to try another board.
> I'm not looking for real-time control right now, but that might be of
> interest in the future.
> 
> Has anyone looked at creating bindings to the NI-DAQmx drivers or the
> open-source COMEDI project, or similar hardware? Would anyone be
> interested in helping out with this driver binding?

I'm assuming there are existing C libraries for this? So a Haskell
binding would just talk to these?
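
An editorial sketch of what the first lines of such a binding might look
like; comedi_open is a real comedilib entry point, while the Haskell side
here is purely illustrative:

    {-# LANGUAGE ForeignFunctionInterface, EmptyDataDecls #-}
    module Comedi where

    import Foreign.Ptr
    import Foreign.C.String

    data ComediDevice   -- opaque handle, only ever used behind a Ptr

    -- C prototype: comedi_t *comedi_open(const char *filename);
    foreign import ccall "comedilib.h comedi_open"
        c_comedi_open :: CString -> IO (Ptr ComediDevice)

    openDevice :: FilePath -> IO (Ptr ComediDevice)
    openDevice path = withCString path c_comedi_open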
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Endianess (was Re: GHC predictability)

2008-05-13 Thread Aaron Denney
On 2008-05-12, Andrew Coppin <[EMAIL PROTECTED]> wrote:
> (Stupid little-endian nonsense... mutter mutter...)

I used to be a big-endian advocate, on the principle that it doesn't
really matter, and it was standard network byte order.  Now I'm
convinced that little endian is the way to go, as bit number n should
have value 2^n, byte number n should have value 256^n, and so forth.
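
Spelled out as code, the consistency argument is just positional
notation (a throwaway sketch, not anything from a library):

import Data.Word (Word8)

-- little-endian: byte n carries weight 256^n, like bit n carrying 2^n
leValue :: [Word8] -> Integer
leValue bs = sum (zipWith (*) (map fromIntegral bs) (iterate (* 256) 1))
-- leValue [0x78, 0x56, 0x34, 0x12] == 0x12345678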

Yes, in human to human communication there is value in having the most
significant bit first.  Not really true for computer-to-computer
communication.

-- 
Aaron Denney
-><-

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Endianess

2008-05-13 Thread Ketil Malde
Aaron Denney <[EMAIL PROTECTED]> writes:

> I used to be a big-endian advocate, on the principle that it doesn't
> really matter, and it was standard network byte order.  Now I'm
> convinced that little endian is the way to go

I guess it depends a lot on what you grew up with.  The names
(little/big endian) are incredibly apt.

The only argument I can come up with, is that big endian seems to make
more sense for 'od':

  % echo foobar > foo 
  % od -x foo
  0000000 6f66 626f 7261 000a
  0000007

Since this is little endian, the output corresponds to "of bo ra
\0\n".

So I guess the argument is that for big-endian, the concatenation of
hex numbers is invariant with respect to word sizes?

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Paul Johnson

Jeff Polakow wrote:

[...] This can be easily fixed by defining a suitable strict sum:

sum' = foldl' (+) 0

and now sum' has constant space. We could try to redefine mean using 
sum':


mean1 xs = sum' xs / fromIntegral (length xs)

but this still gobbles up memory. The reason is that xs is used twice 
and cannot be discarded as it is generated. 
As an experiment I tried using "pointfree" to see if it would do 
something similar.


> $ pointfree "\xs -> foldl' (+) 0 xs / fromIntegral (length xs)"
> ap ((/) . foldl' (+) 0) (fromIntegral . length)

But when I try this in GHCi 6.8.2 I get:

> Prelude Data.List Control.Monad> let mean2 = ap ((/) . foldl' (+) 0) (fromIntegral . length)
>
> <interactive>:1:12:
>     No instance for (Monad ((->) [b]))
>       arising from a use of `ap' at <interactive>:1:12-58
>     Possible fix: add an instance declaration for (Monad ((->) [b]))
>     In the expression: ap ((/) . foldl' (+) 0) (fromIntegral . length)
>     In the definition of `mean2':
>         mean2 = ap ((/) . foldl' (+) 0) (fromIntegral . length)


Any ideas?  Would the auto-generated pointfree version be any better if 
it could be made to work?


Paul.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Analog data acquisition

2008-05-13 Thread Tom Nielsen
Yes. I guess I have to wait for chapter 19, then?

Tom

On Tue, May 13, 2008 at 7:35 PM, Don Stewart <[EMAIL PROTECTED]> wrote:
> tanielsen:
>
>
> > Hello,
>  >
>  > I would like to use a lazy, purely functional language to create an
>  > experiment description (and execution!) language for cellular
>  > neuroscience, i.e. electrical recordings and stimulation.
>  > Unfortunately, this means I have to talk to an
>  > Analog-to-Digital/Digital-to-Analog converter board, with a precision
>  > of about 16 bit at a rate of 50 kHz. I currently have a National
>  > Instruments M-series board, but would be happy to try another board.
>  > I'm not looking for real-time control right now, but that might be of
>  > interest in the future.
>  >
>  > Has anyone looked at creating bindings to the NI-DAQmx drivers or the
>  > open-source COMEDI project, or similar hardware? Would anyone be
>  > interested in helping out with this driver binding?
>
>  I'm assuming there are existing C libraries for this? So a Haskell
>  binding would just talk to these?
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parsec3 performance issues (in relation to v2)

2008-05-13 Thread Neil Mitchell
Hi

>  Anyway, the log file that i'm parsing uses English grammar, and the
> performance really dropped just by upgrading to Parsec3. I was hoping to use
> the ByteString support to boost the speed of already slow code, but had no
> such luck. It basically went from "Ugh, this is kinda slow" to "U i'm
> gonna go grab a burger and let this melt my CPU" haha.

I think it is known that Parsec 3 is slower than Parsec 2, as a result
of the increased generality. I know that in the past someone was
working on it, but I am not sure if they ever got anywhere.

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Using Template Haskell to make type-safe database access

2008-05-13 Thread Bjorn Bringert
On Thu, May 8, 2008 at 5:32 PM, Mads Lindstrøm <[EMAIL PROTECTED]> wrote:
> Hi Wouter,
>
>  Wouter Swierstra wrote:
>
>
> > Nice! I have to admit, it's much nicer than I expected it to be. Just
>  > out of curiosity, what happens when you write:
>  >
>  > selectTupleList :: Connection -> IO [Integer]
>  >
>  > instead of
>  >
>  > selectTupleList :: Connection -> IO [(Integer, String, String)]
>  >
>  > What kind of error message do you get? More specifically, is this
>  > error caught statically or dynamically.
>
>  The type annotation in UseSqlExpr.hs was just for the reader. The
>  compiler can infer the types completely. Thus when I make the suggested
>  change I get a compile time error. It looks like this:
>
>  UseSqlExpr.hs:27:6:
> Couldn't match expected type `Integer'
>against inferred type `(Integer, String, String)'
>   Expected type: IO [Integer]
>   Inferred type: IO [(Integer, String, String)]
> In the expression:
> (return
>$ (map
> (\ [x0[a2ZY], x1[a2ZZ], x2[a300]]
>  -> (readInteger x0[a2ZY],
>  readString x1[a2ZZ],
>  readString x2[a300]))
> rows[a2ZX]))
> In the expression:
> do rows[a2ZX] <- fetchRows
>dsn[a2ZW]
>['S', 'E', 'L', 'E', 'C', 'T', ' ', 'u', 's', 'e', 
> 'r', '_', 'i',
> 'd', ',', ' ', 'u', 's', 'e', 'r', '_', 'n', 'a', 
> 'm', 'e', ',',
> ' ', 'u', 's', 'e', 'r', '_', 'r', 'e', 'a', 'l', 
> '_', 'n', 'a',
> 'm', 'e', ' ', 'F', 'R', 'O', 'M', ' ', 'u', 's', 
> 'e', 'r', ';']
>(return
>   $ (map
>(\ [x0[a2ZY], x1[a2ZZ], x2[a300]]
> -> (readInteger x0[a2ZY],
> readString x1[a2ZZ],
> readString x2[a300]))
>rows[a2ZX]))
>  make: *** [all] Fejl 1
>
>
>  >
>  > The only other limitation I can think of, would be in the situation
>  > where you don't have compile-time access to the database, e.g.
>  > developing software for a client with a database that can only be
>  > accessed from their intranet. I have no idea how much of a limitation
>  > that is.
>
>  True, but this limitation is only relevant when you do not have access
>  to the production database or a database with identical metadata. How
>  often do people develop like that? How are they testing? I have a hard
>  time picturing a setup without a test database with identical metadata
>  to the production database.
>
>
>  > >> Perhaps I should explain my own thoughts on the subject a bit better.
>  > >> I got interested in this problem because I think it makes a nice
>  > >> example of dependent types "in the real world" - you really want to
>  > >
>  > > But won't you end up implementing all the functionality of an SQL
>  > > parser? While possible, it does seem like a huge job. With a TH
>  > > solution
>  > > you will save a lot of work.
>  >
>  > Yes - but parsing the result of an SQL describe statement is pretty
>  > easy.
>  ok.
>
>
>  >
>  > > A library that
>  > > will be a lot more complex to learn than what I am proposing (assuming
>  > > the developer already knows SQL).
>  >
>  > Hmm. This is a rather sticky point. One might also argue that Haskell
>  > developers have to learn SQL to use the solution you propose. I'm not
>  > particularly convinced. Both approaches have their merits I think.
>
>  Yes. I was _not_ making what you could call a strong argument. I was
>  assuming that most (Haskell) developers knew SQL anyway. I have no data
>  to back it up. Just my gut feeling.
>
>  To be fair I should mention a couple of drawbacks with the TH-based
>  approach. While SQL got static typing, it is not really as powerful as
>  it could be. For example if you do "select sum(...) from ..." the type
>  system will tell you that a set of values is returned. In reality this
>  set will never have more than one member. Your proposal would be able to
>  return a Float instead of a [Float].
>
>  Another advantage of your proposal (and disadvantage of the TH-based one)
>  would be that it can abstract over variances in different database
>  implementations. That is, you could translate to SQL depending on the SQL
>  backend. This would be really nice. But I guess it would also be a big
>  task.
>
>
>  >
>  > Anyhow - nice work! Have you asked Bjorn Bringert what he thinks? He's
>  > a really clever and approachable guy - and he knows a lot more about
>  > interfacing with databases than I do.

I guess I'll just have to reply then :-)

Mads: Preparing the statement and asking the DB about the type at
compile time is a great idea! I've never thought of that. Please consider
completing this and packaging it as a library.

I can't really see any major problems with this approach, other than
the obvious "changing schema" problem that it shares with

Re: [Haskell-cafe] Data.Dynamic over the wire

2008-05-13 Thread Alfonso Acosta
On Tue, May 13, 2008 at 7:39 PM, Jules Bean <[EMAIL PROTECTED]> wrote:
>  One thing which you can't obviously do is write Read or Show instances
>  for Dynamic. So can we pass Dynamic data over the wire?  If not,
>  Dynamic is limited to the context of "within a single program", and
>  can't be used over the network between cooperating programs, or in
>  file formats, etc.

I've never used hs-plugins, but if I recall properly, it includes its
own implementation of TypeRep (and consequently Dynamic) in order to
overcome the serialization problem you have mentioned.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parsec3 performance issues (in relation to v2)

2008-05-13 Thread Ketil Malde
"Neil Mitchell" <[EMAIL PROTECTED]> writes:

> I think it is known that Parsec 3 is slower than Parsec 2, as a result
> of the increased generality. I know that in the past someone was
> working on it, but I am not sure if they ever got anywhere.

I got pretty good performance (IMHO - about 10MB/s, still CPU-bound)
using a lazy bytestring tokenizer and Parsec on top of that.  Of
course, it probably depends on the complexity of the parsing...
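
The rough shape of that setup, in case it helps (the token type and
names here are mine, purely illustrative; Parsec 2's 'token' builds a
parser over any token list):

import qualified Data.ByteString.Lazy.Char8 as L
import Text.ParserCombinators.Parsec
import Text.ParserCombinators.Parsec.Pos (initialPos)

data Tok = TWord String | TNum Integer deriving (Eq, Show)

-- a cheap lexing pass over the lazy chunks
tokenize :: L.ByteString -> [Tok]
tokenize = map toTok . L.words
  where toTok w | L.all (`elem` "0123456789") w = TNum (read (L.unpack w))
                | otherwise                     = TWord (L.unpack w)

-- a Parsec parser over [Tok]; source positions are faked for brevity
tok :: (Tok -> Maybe a) -> GenParser Tok () a
tok = token show (const (initialPos "<tokens>"))

number :: GenParser Tok () Integer
number = tok (\t -> case t of { TNum n -> Just n; _ -> Nothing })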

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Endianess

2008-05-13 Thread Jed Brown
On Tue 2008-05-13 20:46, Ketil Malde wrote:
> Aaron Denney <[EMAIL PROTECTED]> writes:
> 
> I guess it depends a lot on what you grew up with.  The names
> (little/big endian) are incredibly apt.
> 
> The only argument I can come up with, is that big endian seems to make
> more sense for 'od':
> 
>   % echo foobar > foo 
>   % od -x foo
>   0000000 6f66 626f 7261 000a
>   0000007

This, of course, is because `od -x' regards the input as 16-bit integers.  We
can get saner output if we regard it as 8-bit integers.

  $ od -t x1 foo
  0000000 66 6f 6f 62 61 72 0a
  0000007

> > Now I'm convinced that little endian is the way to go, as bit number n
> > should have value 2^n, byte number n should have value 256^n, and so forth.

It's not that simple with bits.  They lack consistency just like the usual US
date format and the way Germans read numbers.

Jed


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Endianess

2008-05-13 Thread Ketil Malde
Jed Brown <[EMAIL PROTECTED]> writes:

> This, of course, is because `od -x' regards the input as 16-bit integers.  We
> can get saner output if we regard it as 8-bit integers.

Yes, of course. The point was that for big-endian, the word size
won't matter.  Little-endian words will be reversed with respect to
the normal (left-to-right, most significant first) way we print
numbers.

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: GHC predictability

2008-05-13 Thread Anton van Straaten

Achim Schneider wrote:

To get a bit more on-topic: I currently completely fail to implement a
layout rule in Parsec because I don't understand its inner workings,
and thus constantly mess up my state. Parsec's ease of usage is
deceiving as soon as you use more than combinators: Suddenly the
plumbing becomes important, and hackage is full of such things. Haskell
has potentially infinite learning curves, and each one of them
usually represents a wall. To make them crumble, you have to get used to
not understand anything until you understand everything.


A big component of this is just that a high level of abstraction is 
involved.  Something similar occurs in other languages, for programs 
that are written in a very abstract way.  Some frameworks in e.g. 
Smalltalk, Java, or C++ are an example of this: full of classes whose 
domain is mainly internal to the framework, and you have to understand 
the framework's design principles in their full generality in order to 
be able to really understand the code.


As a more concrete example related to Parsec, consider a generator of 
table-driven parsers written in C, and compare this to writing a 
recursive-descent parser directly.  The code for the parser generator is 
completely impenetrable for someone who isn't familiar with the theory 
behind it, so if they want to change the generator's behavior, they're 
likely to be stuck.  Whereas for a recursive descent parser for a single 
language, it's much easier to map between the ultimate application 
goals, and how those are accomplished in the code, without much special 
knowledge.


Of course there are pros and cons on either side.  One reason that DSLs 
work well is that when done right, so that abstraction leakage is 
minimal, they can insulate users from having to understand the 
underlying system.  Embedded DSLs, like Parsec, seem more likely to 
suffer from problems in this area, although in that case the tradeoff is 
that you're getting to use them directly in a general-purpose language.


Anton

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Endianess

2008-05-13 Thread Aaron Denney
On 2008-05-13, Jed Brown <[EMAIL PROTECTED]> wrote:
>> > Now I'm convinced that little endian is the way to go, as bit number n
>> > should have value 2^n, byte number n should have value 256^n, and so forth.
>
> It's not that simple with bits.  They lack consistency just like the
> usual US date format and the way Germans read numbers.

Yes.  I'm saying what should be, not what is.  I'm saying one of those
ways is wrong, wrong, wrong.  It usually doesn't matter in practice,
because writes to e.g. RAM effectively happen at byte-level or higher,
making the internal labels fairly arbitrary.  It matters and can cause
confusion in actual serial protocols, of course, which have been making
a resurgence in recent years, though again, the bit order in these are
well understood.  Just possibly wrong.

-- 
Aaron Denney
-><-

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Luke Palmer
On Tue, May 13, 2008 at 12:48 PM, Paul Johnson <[EMAIL PROTECTED]> wrote:
>  > $ pointfree "\xs -> foldl' (+) 0 xs / fromIntegral (length xs)"
>  > ap ((/) . foldl' (+) 0) (fromIntegral . length)
>
>  But when I try this in GHCi 6.8.2 I get:
>
>  > Prelude Data.List Control.Monad> let mean2 = ap ((/) . foldl' (+) 0) (fromIntegral . length)
>  >
>  > <interactive>:1:12:
>  >     No instance for (Monad ((->) [b]))
>  >       arising from a use of `ap' at <interactive>:1:12-58
>  >     Possible fix: add an instance declaration for (Monad ((->) [b]))
>  >     In the expression: ap ((/) . foldl' (+) 0) (fromIntegral . length)
>  >     In the definition of `mean2':
>  >         mean2 = ap ((/) . foldl' (+) 0) (fromIntegral . length)

It's using the Monad ((->) r) instance, which doesn't exist by
default.  import Control.Monad.Instances to get it.
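
With that import in scope the definition should go through; a sketch
of the session (from memory, untested -- and the space behaviour is
unchanged, as noted elsewhere in the thread):

Prelude> :m + Data.List Control.Monad Control.Monad.Instances
Prelude Data.List Control.Monad Control.Monad.Instances> let mean2 = ap ((/) . foldl' (+) 0) (fromIntegral . length)
Prelude Data.List Control.Monad Control.Monad.Instances> mean2 [1..10]
5.5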

Luke
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Bryan O'Sullivan
Darrin Thompson wrote:

> These "tricks" going into Real World Haskell?

Some will, yes.

For example, the natural and naive way to write Andrew's "mean" function
doesn't involve tuples at all: simply tail recurse with two accumulator
parameters, and compute the mean at the end.  GHC's strictness analyser
does the right thing with this, so there's no need for seq, $!, or the
like.  It's about 3 lines of code.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Endianess

2008-05-13 Thread Daniel Fischer
On Tuesday, 13 May 2008 21:28, Aaron Denney wrote:
> On 2008-05-13, Ketil Malde <[EMAIL PROTECTED]> wrote:
> > Jed Brown <[EMAIL PROTECTED]> writes:
> >> This, of course, is because `od -x' regards the input as 16-bit
> >> integers.  We can get saner output if we regard it is 8-bit integers.
> >
> > Yes, of course. The point was that for big-endian, the word size
> > won't matter.  Little-endian words will be reversed with respect to
> > the normal (left-to-right, most significant first) way we print
> > numbers.
>
> Right.  Because we print numbers backwards.

Try Hebrew or Arabic then; they have the least significant digit first in 
reading order :)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Jeff Polakow
Hello,

> For example, the natural and naive way to write Andrew's "mean" function
> doesn't involve tuples at all: simply tail recurse with two accumulator
> parameters, and compute the mean at the end.  GHC's strictness analyser
> does the right thing with this, so there's no need for seq, $!, or the
> like.  It's about 3 lines of code.
> 
Is this the code you mean?

meanNat = go 0 0 where
go s n [] = s / n
go s n (x:xs) = go (s+x) (n+1) xs

If so, bang patterns are still required in ghc-6.8.2 to run 
in constant memory:

meanNat = go 0 0 where
go s n [] = s / n
go !s !n (x:xs) = go (s+x) (n+1) xs

Is there some other way to write it so that ghc will essentially insert 
the bangs for me?

-Jeff



---

This e-mail may contain confidential and/or privileged information. If you 
are not the intended recipient (or have received this e-mail in error) 
please notify the sender immediately and destroy this e-mail. Any 
unauthorized copying, disclosure or distribution of the material in this 
e-mail is strictly forbidden.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Endianess

2008-05-13 Thread Aaron Denney
On 2008-05-13, Ketil Malde <[EMAIL PROTECTED]> wrote:
> Jed Brown <[EMAIL PROTECTED]> writes:
>
>> This, of course, is because `od -x' regards the input as 16-bit integers.  We
>> can get saner output if we regard it as 8-bit integers.
>
> Yes, of course. The point was that for big-endian, the word size
> won't matter.  Little-endian words will be reversed with respect to
> the normal (left-to-right, most significant first) way we print
> numbers.

Right.  Because we print numbers backwards.

-- 
Aaron Denney
-><-

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Jeff Polakow
Hello,

>  > $ pointfree "\xs -> foldl' (+) 0 xs / fromIntegral (length xs)"
>  > ap ((/) . foldl' (+) 0) (fromIntegral . length)
> 
This will have the same space usage as the pointed version. You can see 
this by looking at the monad instance for ((->) r):

instance Monad ((->) r) where
return = const
f >>= k = \ r -> k (f r) r
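
Unfolding ap (ap f g = f >>= \a -> g >>= \b -> return (a b)) with this
instance makes the sharing explicit; the pointfree mean is literally
the pointed one (a hand expansion, sketched):

import Data.List (foldl')

-- ap f g = \xs -> f xs (g xs)   -- after unfolding the reader (>>=)
mean2' :: [Double] -> Double
mean2' xs = ((/) . foldl' (+) 0) xs ((fromIntegral . length) xs)
-- xs is still shared by two traversals, hence retained in full.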

-Jeff


---

This e-mail may contain confidential and/or privileged information. If you 
are not the intended recipient (or have received this e-mail in error) 
please notify the sender immediately and destroy this e-mail. Any 
unauthorized copying, disclosure or distribution of the material in this 
e-mail is strictly forbidden.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Endianess

2008-05-13 Thread Achim Schneider
Jed Brown <[EMAIL PROTECTED]> wrote:

> It's not that simple with bits.  They lack consistency just like the
> usual US date format and the way Germans read numbers.
> 
So you claim that you pronounce 14 "tenty-four"? In German, pronunciation
is completely uniform from 13 to 99.


-- 
(c) this sig last receiving data processing entity. Inspect headers for
past copyright information. All rights reserved. Unauthorised copying,
hiring, renting, public performance and/or broadcasting of this
signature prohibited. 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Dan Doel
On Tuesday 13 May 2008, Jeff Polakow wrote:
> Is this the code you mean?
>
> meanNat = go 0 0 where
> go s n [] = s / n
> go s n (x:xs) = go (s+x) (n+1) xs
>
> If so, bang patterns are still required in ghc-6.8.2 to run
> in constant memory:
>
> meanNat = go 0 0 where
> go s n [] = s / n
> go !s !n (x:xs) = go (s+x) (n+1) xs
>
> Is there some other way to write it so that ghc will essentially insert
> the bangs for me?

It works fine here when compiled with -O or better.

Perhaps that should be a tip in the book? Make sure you're compiling with 
optimizations. :)

-- Dan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Don Stewart
jeff.polakow:
>Hello,
> 
>> For example, the natural and naive way to write Andrew's "mean" function
>> doesn't involve tuples at all: simply tail recurse with two accumulator
>> parameters, and compute the mean at the end.  GHC's strictness analyser
>> does the right thing with this, so there's no need for seq, $!, or the
>> like.  It's about 3 lines of code.
>>
>Is this the code you mean?
> 
>meanNat = go 0 0 where
>go s n [] = s / n
>go s n (x:xs) = go (s+x) (n+1) xs
>If so, bang patterns are still required in ghc-6.8.2 to run
>in constant memory:
> 
>meanNat = go 0 0 where
>go s n [] = s / n
>go !s !n (x:xs) = go (s+x) (n+1) xs
> 
>Is there some other way to write it so that ghc will essentially insert
>the bangs for me?

Yes, give a type annotation, constraining 'n' to Int.

meanNat :: [Double] -> Double
meanNat = go 0 0
  where
   go :: Double -> Int -> [Double] -> Double
   go s n [] = s / fromIntegral n
   go s n (x:xs) = go (s+x) (n+1) xs

And you get this loop:

M.$wgo :: Double#
  -> Int#
  -> [Double]
  -> Double#

M.$wgo =
  \ (ww_smN :: Double#)
(ww1_smR :: Int#)
(w_smT :: [Double]) ->
case w_smT of wild_B1 {
  [] -> /## ww_smN (int2Double# ww1_smR);
  : x_a9k xs_a9l ->
case x_a9k of wild1_am7 { D# y_am9 ->
M.$wgo (+## ww_smN y_am9) (+# ww1_smR 1) xs_a9l
}
}

Without the annotation you get:

M.$wgo :: Double#
  -> Integer
  -> [Double]
  -> Double

GHC sees through the strictness of I#.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Andrew Coppin

Jeff Polakow wrote:


Then, I immediately blow my stack if I try something like:

mean [1..10].

The culprit is actually sum which is defined in the base libraries as 
either a foldl or a direct recursion depending on a compiler flag. In 
either case, the code is not strict enough; just trying to compute:


 sum [1..1000]

blows the stack. This can be easily fixed by defining a suitable 
strict sum:


sum' = foldl' (+) 0

and now sum' has constant space.


OK *now* I'm worried... I thought sum was _already_ defined this way? o_O
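
(For reference, the distinction at stake is just lazy versus strict
accumulation -- my sketch of the two shapes, not the actual base
source:)

import Data.List (foldl')

sumLazy, sumStrict :: Num a => [a] -> a
sumLazy   = foldl  (+) 0   -- builds the thunk (((0+1)+2)+3)+... first
sumStrict = foldl' (+) 0   -- forces the accumulator at each step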

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Don Stewart
dons:
> jeff.polakow:
> >Hello,
> > 
> >> For example, the natural and naive way to write Andrew's "mean" 
> > function
> >> doesn't involve tuples at all: simply tail recurse with two accumulator
> >> parameters, and compute the mean at the end.  GHC's strictness analyser
> >> does the right thing with this, so there's no need for seq, $!, or the
> >> like.  It's about 3 lines of code.
> >>
> >Is this the code you mean?
> > 
> >meanNat = go 0 0 where
> >go s n [] = s / n
> >go s n (x:xs) = go (s+x) (n+1) xs
> >If so, bang patterns are still required in ghc-6.8.2 to run
> >in constant memory:
> > 
> >meanNat = go 0 0 where
> >go s n [] = s / n
> >go !s !n (x:xs) = go (s+x) (n+1) xs
> > 
> >Is there some other way to write it so that ghc will essentially insert
> >the bangs for me?
> 
> Yes, give a type annotation, constraining 'n' to Int.
> 
> meanNat :: [Double] -> Double
> meanNat = go 0 0
>   where
>go :: Double -> Int -> [Double] -> Double
>go s n [] = s / fromIntegral n
>go s n (x:xs) = go (s+x) (n+1) xs
> 
> And you get this loop:
> 
> M.$wgo :: Double#
>   -> Int#
>   -> [Double]
>   -> Double#
> 
> M.$wgo =
>   \ (ww_smN :: Double#)
> (ww1_smR :: Int#)
> (w_smT :: [Double]) ->
> case w_smT of wild_B1 {
>   [] -> /## ww_smN (int2Double# ww1_smR);
>   : x_a9k xs_a9l ->
> case x_a9k of wild1_am7 { D# y_am9 ->
> M.$wgo (+## ww_smN y_am9) (+# ww1_smR 1) xs_a9l
> }
> }
> 

Note this is pretty much identical to the code you get with a foldl' (though
without the unboxed pair return):

import Data.List
import Text.Printf
import Data.Array.Vector

mean :: [Double] -> Double
mean arr = b / fromIntegral a
  where
k (n :*: s) a = (n+1 :*: s+a)
(a :*: b) = foldl' k (0 :*: 0) arr :: (Int :*: Double)

main = printf "%f\n" . mean $ [1 .. 1e9]

Note I'm using strict pairs for the accumulator, instead of banging lazy
pairs.

$s$wlgo :: [Double]
-> Double#
-> Int#
-> (# Int, Double #)

$s$wlgo =
  \ (xs1_aMQ :: [Double])
(sc_sYK :: Double#)
(sc1_sYL :: Int#) ->
case xs1_aMQ of wild_aML {
  [] -> (# I# sc1_sYL, D# sc_sYK #);
  : x_aMP xs11_XMX ->
case x_aMP of wild1_aOg { D# y_aOi ->
$s$wlgo xs11_XMX (+## sc_sYK y_aOi) (+# sc1_sYL 1)
}
}

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Short circuiting and the Maybe monad

2008-05-13 Thread David Menendez
2008/5/13 Abhay Parvate <[EMAIL PROTECTED]>:
> Yes, I had always desired that the operator >>= should have been right
> associative for this short cut even when written without the 'do' notation.

You pretty much always want "a >>= b >>= c" to parse as "(a >>= b) >>=
c" and not "a >>= (b >>= c)". In the latter case, the two uses of
(>>=) aren't in the same monad.

You can get desired effect with the Kleisli composition operator
(>=>), which is in recent versions of Control.Monad. Unfortunately, it
has the same precedence as (>>=), so you can't use them together
without parentheses.

Using it, "a >>= b >>= c" can be rewritten "a >>= (b >=> c)" which is
short for "a >>= \x -> b x >>= c".
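
A tiny Maybe example of the same rewrite (safeRecip and safeSqrt are
hypothetical helpers, just for illustration):

import Control.Monad ((>=>))

safeRecip, safeSqrt :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)
safeSqrt x | x < 0     = Nothing
           | otherwise = Just (sqrt x)

-- the same computation, parenthesised per the two readings:
f1, f2 :: Maybe Double -> Maybe Double
f1 m = (m >>= safeRecip) >>= safeSqrt
f2 m = m >>= (safeRecip >=> safeSqrt)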

-- 
Dave Menendez <[EMAIL PROTECTED]>

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Andrew Coppin

Darrin Thompson wrote:

These "tricks" going into Real World Haskell?


Seconded.


When you say someone
needs to get familiar with the "STG paper" it scares me (a beginner)
off a little, an I've been making an effort to approach the papers.


Well, I'm the sort of contrary person who reads random papers like that 
just for the fun of it. But when somebody says something like this, I 
don't think "ooo, that's scary", I think "ooo, somebody really ought to 
sit down and write a more gentle introduction". You really shouldn't 
*need* to know the exact implementation details to get some idea of what 
will perform well and what won't. But obviously you do need some kind of 
high-level understanding of what's going on. The STG paper isn't a good 
way to get that high-level overview.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Don Stewart
andrewcoppin:
> Darrin Thompson wrote:
> >These "tricks" going into Real World Haskell?
> 
> Seconded.
> 
> >When you say someone
> >needs to get familiar with the "STG paper" it scares me (a beginner)
> >off a little, an I've been making an effort to approach the papers.
> 
> Well, I'm the sort of contrary person who reads random papers like that 
> just for the fun of it. But when somebody says something like this, I 
> don't think "ooo, that's scary", I think "ooo, somebody really ought to 
> sit down and write a more gentle introduction". You really shouldn't 
> *need* to know the exact implementation details to get some idea of what 
> will perform well and what won't. But obviously you do need some kind of 
> high-level understanding of what's going on. The STG paper isn't a good 
> way to get that high-level overview.

Andrew, would you say you understand the original problem of why

mean xs = sum xs / fromIntegral (length xs)

was a bad idea now? Or why the left folds were a better solution?

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Darrin Thompson
On Tue, May 13, 2008 at 4:30 PM, Andrew Coppin
<[EMAIL PROTECTED]> wrote:
>  Well, I'm the sort of contrary person who reads random papers like that
> just for the fun of it. But when somebody says something like this, I don't
> think "ooo, that's scary", I think "ooo, somebody really ought to sit down
> and write a more gentle introduction". You really shouldn't *need* to know
> the exact implementation details to get some idea of what will perform well
> and what won't. But obviously you do need some kind of high-level
> understanding of what's going on. The STG paper isn't a good way to get that
> high-level overview.
>

I don't think anyone would disagree with that. Reflecting on what I
already know, I can optimize python pretty well and the principles are
pretty similar for C. The reason I can't just port that knowledge is
that with GHC I'm in the land of optimizing for cache hits and at the
same time I'm at a really high level of abstraction so I have to have
some mental picture of how the plumbing connects.

I'm hoping that the optimization chapter of RWH covers a lot of
individual techniques. I think the sum of the techniques will shed
light on the compiler internals in a practical way. But then I'm not
the one doing the work.

--
Darrin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Endianess

2008-05-13 Thread Jed Brown
On Tue 2008-05-13 22:14, Achim Schneider wrote:
> Jed Brown <[EMAIL PROTECTED]> wrote:
> 
> > It's not that simple with bits.  They lack consistency just like the
> > usual US date format and the way Germans read numbers.
> > 
> So you claim that you pronounce 14 tenty-four? In German pronunciation
> is completely uniform from 13 to 99.

I would argue that 100n+11 to 100n+19 are special cases in both German and
English, but only 100n+11 to 100n+15 in Spanish.  In any case, the order of the
digits is dependent on where the decimal falls.  If the ordering is pure little
endian (not x86 halfway) or big endian, the bit order is not dependent on the
width of the field.  Converting breaks this nice property.  Convention is to
write numbers in big endian and it would be nice if there were fewer exceptions.
Yet another argument for ISO 8601 dates.  A somewhat dramatic change would be to
put the exponent first in scientific notation.  Alas, this seems unlikely to
happen.

Jed


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parsec3 performance issues (in relation to v2)

2008-05-13 Thread Philippa Cowderoy
On Tue, May 13, 2008 5:53 am, Neal Alexander wrote:
> I can post the full profiling info if anyone really cares.
>

Any info is helpful. It's taking a while to get round to things, but the
more relevant info we have to hand when we do, the easier it is to improve
things and the less begging for data we have to do!

-- 
[EMAIL PROTECTED]

I knew I forgot to pack something - thankfully it was my .sig

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: GHC predictability

2008-05-13 Thread Andrew Coppin

Don Stewart wrote:

Now try, say, "mean [1.. 1e9]", and watch GHC eat several GB of RAM. (!!)



But you know why, don't you?
  


What I'm trying to say [and saying very badly] is that Haskell is an 
almost terrifyingly subtle language. Seemingly insignificant changes can 
have drastic consequences. (Ever tried to run a program and find it goes 
into an infinite loop or a deadlock because you accidentally made some 
function slightly stricter than it needs to be?) I can't find it right 
now, but some paper I was reading showed two example programs, one of 
which is dramatically less performant than the other, and yet they look 
like they should be exactly equivalent, "and one might even expect a 
good compiler to 'optimise' one to the other [in the wrong direction]". 
After studying that chapter for about half an hour, I *still* can't wrap 
my brain around what the difference is. But I assume SPJ knows what he's 
talking about.


I sat down and spent the best part of a day writing an MD5 
implementation. Eventually I got it so that all the test vectors work 
right. (Stupid little-endian nonsense... mutter mutter...) When I tried 
it on a file containing more than 1 MB of data... o dear...



Did you use Data.Binary or the other libraries for efficient access to 
gigabytes of data?
  


Data.ByteString for all I/O. The rest is just moving data around one 
byte at a time. Difficult to see how you'd do it any different. [I'll 
check to see if I still have the code, maybe somebody can do something 
to it.]


And we're not talking about "gigabytes" of data - we're talking about 
the program completely choking on a 5 MB file. Even plain String I/O 
couldn't possibly take *minutes* to read 5 MB!



Of course, the first step in any serious attempt at performance improvement
is to actually profile the code to figure out where the time 
is being spent. 



Well, actually, for our 'mean()' case, it means just using the right structures.
Arrays for example:
  


Doesn't that mean all the data has to be present in memory at once?

I would expect [optimistically?] that if you have a definition for 
"mean" that performs only 1 traversal of the list, you'd end up with 
something approximating


 x = 0
 for n = 1 to 1e9
   x = x + n
 return x

If you allocate an array, you're limited to what will fit into RAM at once.


(The compiler is optimising this to:

Main_zdszdwfold_info:
  leaq32(%r12), %rax
  cmpq%r15, %rax
  movq%rax, %r12
  ja  .L10
  movsd   .LC0(%rip), %xmm0
  ucomisd %xmm5, %xmm0
  jae .L12
  movq%rsi, (%rax)
  movq$base_GHCziFloat_Dzh_con_info, -24(%rax)
  movsd   %xmm6, -16(%rax)
  movq$base_GHCziBase_Izh_con_info, -8(%rax)
  leaq-7(%rax), %rbx
  leaq-23(%rax), %rsi
  jmp *(%rbp)
.L12:
  movapd  %xmm6, %xmm0
  addq$1, %rsi
  subq$32, %r12
  addsd   %xmm5, %xmm0
  addsd   .LC2(%rip), %xmm5
  movapd  %xmm0, %xmm6
  jmp Main_zdszdwfold_info

Note even any garbage collection. This should be pretty much the same
performance-wise as unoptimised C.
  


Uh... I can't actually read x86 assembly, but I'll take your word. ;-) I 
guess it was kind of a bad example.


almost any nontrivial program you write 
spends 60% or more of its time doing GC rather than actual work. 



Ok, you're doing something very wrong. GC time is typically less than 15% of 
the running
time of typical work programs I hack on.

I bet you're using lists inappropriately?
  


No offense, but from what I can tell, you're a super-ultra-hyper expert 
in Haskell hacking. It's not surprising that *you* can write fast code. 
What I'm debating is how easy it is for relative beginners to write 
performant code. Certainly I'm not attempting to suggest that Haskell 
programs cannot be fast - this is manifestly false. What I'm contesting 
is how easy it is to make them fast.


I guess because Haskell is such an implicit language, it's very easy to 
end up implicitly doing a hell of a lot of work without realising you're 
doing it. Certainly when I first started with Haskell, a lot of 
non-trivial programs I wrote were *horrifyingly* slow, or ate 
astonishing amounts of RAM. (My favourite one was that time I tried to 
use the "length" function to decide when to switch from quick-sort to 
insertion sort. I'll leave you to guess what this did to the performance 
levels...) Today, my programs generally work better - but there's still 
a long, long way from being "fast".



I think there is a problem that few people are taking the time to understand
the compilation model of Haskell, while they've had the C runtime behaviour
drilled into their brains since college.

Until you sit down and understand what your Haskell code means, it will be very
hard to reason about optimisations, unfortunately.
  


You're probably right about all that. I would humbly suggest that what 
is somewhat lacking is a good, introductory, high-level text on what 
makes Haskell go fast and what makes it go slow.

Re[2]: [Haskell-cafe] GHC predictability

2008-05-13 Thread Bulat Ziganshin
Hello Don,

Wednesday, May 14, 2008, 12:34:07 AM, you wrote:

>> high-level understanding of what's going on. The STG paper isn't a good
>> way to get that high-level overview.

> Andrew, would you say you understand the original problem of why

> mean xs = sum xs / fromIntegral (length xs)

> was a bad idea now? Or why the left folds were a better solution?

i think that the problem is just that xs can't be consumed while it's
generated, because it's used two times, and this may be understood just
by learning graph reduction as the evaluation strategy of Haskell. isn't it?


-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Andrew Coppin

Don Stewart wrote:

Andrew, would you say you understand the original problem of why

mean xs = sum xs / fromIntegral (length xs)

was a bad idea now? Or why the left folds were a better solution?
  


That definition of mean is wrong because it traverses the list twice. 
(Curiosity: would traversing it twice in parallel work any better?) As 
for the folds - I always *always* mix up left and right folds. Every 
single damn time I want a fold I have to look it up to see which one I 
want. I had a similar problem with learning to drive, by the way... the 
consequences there are of course much more serious than just crashing 
your _computer_...


It was probably a poor example. The point I was attempting to make is 
that in Haskell, very subtle little things can have an unexpectedly 
profound effect. If you don't know what you're supposed to be looking 
for, it can be really hard to see why your program is performing badly.


For what it's worth, I think I *do* currently have a reasonable grasp of 
how lazy evaluation works, normal order reduction, graph machines, and 
so on. And yet, I still have trouble making my code go fast sometimes. 
As I said in another post, if I can track down some *specific* programs 
I've written and had problems with, maybe we can have a more meaningful 
debate about it.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Endianess

2008-05-13 Thread Andrew Coppin

Aaron Denney wrote:

On 2008-05-12, Andrew Coppin <[EMAIL PROTECTED]> wrote:
  

(Stupid little-endian nonsense... mutter mutter...)



I used to be a big-endian advocate, on the principle that it doesn't
really matter, and it was standard network byte order.  Now I'm
convinced that little endian is the way to go, as bit number n should
have value 2^n, byte number n should have value 256^n, and so forth.

Yes, in human to human communication there is value in having the most
significant bit first.  Not really true for computer-to-computer
communication.
  


It just annoys me that the number 0x12345678 has to be transmuted into 
0x78563412 just because Intel says so. Why make everything so complicated?


[Oh GOD I hope I didn't just start a Holy War...]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Endianess (was Re: GHC predictability)

2008-05-13 Thread Lennart Augustsson
Also, the way we write numbers is little endian when writing in
Arabic; we just forgot to reverse the digits when we borrowed the
notation.

Little endian is more logical unless you also number your bits with
MSB as bit 0.

On Tue, May 13, 2008 at 7:35 PM, Aaron Denney <[EMAIL PROTECTED]> wrote:
> On 2008-05-12, Andrew Coppin <[EMAIL PROTECTED]> wrote:
>> (Stupid little-endian nonsense... mutter mutter...)
>
> I used to be a big-endian advocate, on the principle that it doesn't
> really matter, and it was standard network byte order.  Now I'm
> convinced that little endian is the way to go, as bit number n should
> have value 2^n, byte number n should have value 256^n, and so forth.
>
> Yes, in human to human communication there is value in having the most
> significant bit first.  Not really true for computer-to-computer
> communication.
>
> --
> Aaron Denney
> -><-
>
> ___
> Haskell-Cafe mailing list
> Haskell-Cafe@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] HSFFIG compilation problem

2008-05-13 Thread Harri Kiiskinen
Hello,

I've tried to compile hsffig-1.0pl2, unsuccesfully, also from the
unstable darcs repo. I'm using ghc-6.8 (ghc6-6.8.2 from Debian
unstable). HSFFIG comes with its own version of Cabal, but I cannot get
past the configuration phase (./cabal-setup configure), when I get the
following output:

Configuring HSFFIG-1.0...
configure: searching for ghc in path.
configure: found ghc at /usr/bin/ghc
configure: looking for package tool: ghc-pkg near compiler
in /usr/bin/ghc
configure: found package tool in /usr/bin/ghc-pkg
configure: Using install prefix: /usr/local
configure: Using compiler: /usr/bin/ghc
configure: Compiler flavor: GHC
configure: Compiler version: 6.8.2
configure: Using package tool: /usr/bin/ghc-pkg
configure: Using haddock: /usr/bin/haddock
configure: Using happy: /usr/bin/happy
configure: Using alex: /usr/bin/alex
configure: Using hsc2hs: /usr/bin/hsc2hs
configure: Using cpphs: /usr/bin/cpphs
configure: Reading installed packages...
configure: Dependency base-any: using base-3.0.1.0
cannot satisfy dependency text-any

Does anyone have an idea, how the dependency 'text-any' could be
satisfied?

Harri K.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Maybe a, The Rationale

2008-05-13 Thread PR Stanley



Paul:   What is the underlying rationale for the Maybe data type?

It is the equivalent of a database field that can be NULL.


Paul: shock, horror! the null value or the absence of any 
value denoted by null is not really in harmony with the relational model.




> is it the safe style of programming it encourages?

Yes.  Consider C, where this is typically done with a NULL pointer, or
Lisp, where you use the empty list, nil.  These are perfectly good
values in themselves, while Nothing is just Nothing, if you pardon the
pun.

-k
--
If I haven't seen further, it is by standing in the footprints of giants


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: GHC predictability

2008-05-13 Thread Achim Schneider
Andrew Coppin <[EMAIL PROTECTED]> wrote:

> You're probably right about all that. I would humbly suggest that
> what is somewhat lacking is a good, introductory, high-level text on
> what makes Haskell go fast and what makes it go slow. As with many
> things in the Haskell world, there are bits and pieces of information
> out there, but it's difficult to track them all down and present a
> coherent picture. We've got a section on the wiki containing scraps
> and snippets of information. There are various GHC-related papers [if
> you can find them online]. GHC has various profiling possibilities,
> but thus far I've found it difficult to digest the results. We need a
> good, thorough body of text on this subject, I feel. [Of course, that
> still means somebody has to write one...]
> 
Something like the history paper but concentrating on algorithms,
techniques & tricks would be great, yes. And, most importantly, fewer
buzzwords, where you're lucky to find a definition by googling.

> 1. What is "ghc-core"?
> 
An intermediate language, I'm quoting from memory:

First comes a syntax tree, then the type checker, then the translation
to Core, then optimisations on Core (then the printout), and finally
C/assembly.

> 2. Does anybody know how to actually read GHC's Core output anyway?
> To me, it looks almost exactly like very, very complicated Haskell
> source with a suspicious concentration of case expressions - but I
> understand that in the Core language, many constructs actually mean
> something quite different.
> 
I found it rather easy to parse... as long as you succeed in finding
what you're looking for behind all that inlining. Types ending in #
mean they're unboxed.

It's particularly useful to find out how much ghc specialises your
code.

> 3. Any idea where the STG paper is? Is it still an accurate
> reflection of GHC's current technology?
>
http://research.microsoft.com/~simonpj/Papers/papers.html
Implementing lazy functional languages on stock hardware: the Spineless
Tagless G-machine, SL Peyton Jones, Journal of Functional Programming
2(2), Apr 1992, pp127-202.

while googling for it, I stumbled across 
http://citeseer.ist.psu.edu/reid98putting.html

which might be more up to date, but I have neither read it yet nor any
idea about it whatsoever.

-- 
(c) this sig last receiving data processing entity. Inspect headers for
past copyright information. All rights reserved. Unauthorised copying,
hiring, renting, public performance and/or broadcasting of this
signature prohibited. 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: GHC predictability

2008-05-13 Thread Neil Mitchell
Hi

>  1. What is "ghc-core"?

You actually answer this question as part of question 2. Think of it
as simple Haskell with some additional bits.

>  2. Does anybody know how to actually read GHC's Core output anyway?
> To me,
> it looks almost exactly like very, very complicated Haskell source with a
> suspicious concentration of case expressions - but I understand that in the
> Core language, many constructs actually mean something quite different.

There is one difference from standard Haskell I am aware of. In Core,
case x of _ -> 1 will evaluate x; in Haskell it won't. Other than
that, it's just Haskell, but without pattern matching and only cases -
hence the rather high number of cases.
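
A two-line illustration of that difference (a sketch):

-- In source Haskell the wildcard forces nothing, so:
f :: Int -> Int
f x = case x of _ -> 1    -- f undefined == 1
-- A Core case means "evaluate x to WHNF, then branch", so the Core
-- analogue of f undefined would diverge instead.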

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: GHC predictability

2008-05-13 Thread Don Stewart
ndmitchell:
> Hi
> 
> >  1. What is "ghc-core"?
> 
> You actually answer this question as part of question 2. Think of it
> as simple Haskell with some additional bits.
> 
> >  2. Does anybody know how to actually read GHC's Core output anyway?
> > To me,
> > it looks almost exactly like very, very complicated Haskell source with a
> > suspicious concentration of case expressions - but I understand that in the
> > Core language, many constructs actually mean something quite different.
> 
> There is one different from standard Haskell I am aware of. In Core,
> case x of _ -> 1 will evaluate x, in Haskell it won't. Other than
> that, its just Haskell, but without pattern matching and only cases -
> hence the rather high number of cases.

Well, 'let's too, which bind either unlifted or lifted values.

If they're binding lifted things, that's a thunk being allocated.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Endianess

2008-05-13 Thread Aaron Denney
On 2008-05-13, Andrew Coppin <[EMAIL PROTECTED]> wrote:
> Aaron Denney wrote:
>> On 2008-05-12, Andrew Coppin <[EMAIL PROTECTED]> wrote:
>>   
>>> (Stupid little-endian nonsense... mutter mutter...)
>>> 
>>
>> I used to be a big-endian advocate, on the principle that it doesn't
>> really matter, and it was standard network byte order.  Now I'm
>> convinced that little endian is the way to go, as bit number n should
>> have value 2^n, byte number n should have value 256^n, and so forth.
>>
>> Yes, in human to human communication there is value in having the most
>> significant bit first.  Not really true for computer-to-computer
>> communication.
>>   
>
> It just annoys me that the number 0x12345678 has to be transmuted into 
> 0x78563412 just because Intel says so. Why make everything so complicated?
>
> [Oh GOD I hope I didn't just start a Holy War...]

On the other hand I appreciate that the consecutive memory locations
containing [1][0][0][0] are the number 1, no matter whether you're
reading a byte, a short, or an int.
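
A quick demonstration of that property (a sketch using the FFI
marshalling primitives; the result is platform-dependent, which is
rather the point):

import Foreign.Marshal.Alloc (allocaBytes)
import Foreign.Ptr (Ptr, castPtr)
import Foreign.Storable (peek, pokeByteOff)
import Data.Word (Word8, Word16, Word32)

-- write the bytes [1,0,0,0], then read them back at three widths;
-- on a little-endian machine all three come back as 1
demo :: IO (Word8, Word16, Word32)
demo = allocaBytes 4 $ \p -> do
  mapM_ (\(i, b) -> pokeByteOff p i (b :: Word8)) (zip [0 .. 3] [1, 0, 0, 0])
  w8  <- peek (castPtr p :: Ptr Word8)
  w16 <- peek (castPtr p :: Ptr Word16)
  w32 <- peek (castPtr p :: Ptr Word32)
  return (w8, w16, w32)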

-- 
Aaron Denney
-><-

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Analog data acquisition

2008-05-13 Thread Derek Elkins
On Tue, 2008-05-13 at 19:48 +0100, Tom Nielsen wrote:
> Yes. I guess I have to wait for chapter 19, then?

Just read the FFI Addendum:
http://www.cse.unsw.edu.au/~chak/haskell/ffi/

It's not complicated at all.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Brandon S. Allbery KF8NH


On 2008 May 13, at 17:01, Andrew Coppin wrote:

That definition of mean is wrong because it traverses the list  
twice. (Curiosity: would traversing it twice in parallel work any  
better?) As for the folds - I always *always* mix up


It might work "better" but you're still wasting a core that could be  
put to better use doing something more sensible.  It's pretty much  
always best to do all the calculations that require traversing a given  
list in a single traversal.


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon university    KF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Endianess

2008-05-13 Thread Brandon S. Allbery KF8NH


On 2008 May 13, at 17:12, Andrew Coppin wrote:


[Oh GOD I hope I didn't just start a Holy War...]



Er, I'd say it's already well in progress.  :/

--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon university    KF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Data.Dynamic over the wire

2008-05-13 Thread John Meacham
I use a trick like this to allow saving of dynamics into ho files for
jhc, the same thing will work for network connections.

see Info.Info for the data type, and Info.Binary for the binary
serialization routines.

http://repetae.net/dw/darcsweb.cgi?r=jhc;a=tree;f=/Info

John

-- 
John Meacham - ⑆repetae.net⑆john⑈
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] HSFFIG compilation problem

2008-05-13 Thread Gwern Branwen
On 2008.05.13 23:31:17 +0200, Harri Kiiskinen <[EMAIL PROTECTED]> scribbled 
1.1K characters:
> Hello,
>
> I've tried to compile hsffig-1.0pl2, unsuccesfully, also from the
> unstable darcs repo. I'm using ghc-6.8 (ghc6-6.8.2 from Debian
> unstable). HSFFIG comes with its own version of Cabal, but I cannot get
> past the configuration phase (./cabal-setup configure), when I get the
> following output:
>
> Configuring HSFFIG-1.0...
> configure: searching for ghc in path.
> configure: found ghc at /usr/bin/ghc
> configure: looking for package tool: ghc-pkg near compiler
> in /usr/bin/ghc
> configure: found package tool in /usr/bin/ghc-pkg
> configure: Using install prefix: /usr/local
> configure: Using compiler: /usr/bin/ghc
> configure: Compiler flavor: GHC
> configure: Compiler version: 6.8.2
> configure: Using package tool: /usr/bin/ghc-pkg
> configure: Using haddock: /usr/bin/haddock
> configure: Using happy: /usr/bin/happy
> configure: Using alex: /usr/bin/alex
> configure: Using hsc2hs: /usr/bin/hsc2hs
> configure: Using cpphs: /usr/bin/cpphs
> configure: Reading installed packages...
> configure: Dependency base-any: using base-3.0.1.0
> cannot satisfy dependency text-any
>
> Does anyone have an idea, how the dependency 'text-any' could be
> satisfied?
>
> Harri K.

hsffig is old enough that you should probably be using the latest version,
from the Darcs repository (although the last patch is still from February
2006). I got a little further by running make and then modifying the
build-depends:

+Build-depends:  base>3, parsec, hxt, Cabal>=1.1.3, filepath, unix, process, containers, array, directory

Which leads to a bunch of build errors:

[ 5 of 14] Compiling TokenOps ( programs/TokenOps.hs, 
dist/build/ffipkg/ffipkg-tmp/TokenOps.o )

programs/TokenOps.hs:48:11:
No instance for (Text.Parsec.Prim.Stream
   s mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
  arising from a use of `tokenTest' at programs/TokenOps.hs:48:11-45
Possible fix:
  add an instance declaration for
  (Text.Parsec.Prim.Stream
 s mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
In the expression: tokenTest (TCOMM_OPEN undefined) tf
In the definition of `commOpen':
commOpen = tokenTest (TCOMM_OPEN undefined) tf
 where
 tf (TCOMM_OPEN _) (TCOMM_OPEN _) = True
 tf _ _ = False

programs/TokenOps.hs:52:12:
No instance for (Text.Parsec.Prim.Stream
   s1 mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
  arising from a use of `tokenTest' at programs/TokenOps.hs:52:12-47
Possible fix:
  add an instance declaration for
  (Text.Parsec.Prim.Stream
 s1 mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
In the expression: tokenTest (TCOMM_CLOSE undefined) tf
In the definition of `commClose':
commClose = tokenTest (TCOMM_CLOSE undefined) tf
  where
  tf (TCOMM_CLOSE _) (TCOMM_CLOSE _) = True
  tf _ _ = False

programs/TokenOps.hs:58:10:
No instance for (Text.Parsec.Prim.Stream
   s2 mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
  arising from a use of `tokenTest' at programs/TokenOps.hs:58:10-50
Possible fix:
  add an instance declaration for
  (Text.Parsec.Prim.Stream
 s2 mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
In the expression: tokenTest (TKFILE undefined undefined) tf
In the definition of `anyFile':
anyFile = tokenTest (TKFILE undefined undefined) tf
where
tf (TKFILE _ _) (TKFILE _ _) = True
tf _ _ = False

programs/TokenOps.hs:64:12:
No instance for (Text.Parsec.Prim.Stream
   s3 mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
  arising from a use of `tokenTest' at programs/TokenOps.hs:64:12-54
Possible fix:
  add an instance declaration for
  (Text.Parsec.Prim.Stream
 s3 mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
In the expression: tokenTest (TKSTRING undefined undefined) tf
In the definition of `anyString':
anyString = tokenTest (TKSTRING undefined undefined) tf
  where
  tf (TKSTRING _ _) (TKSTRING _ _) = True
  tf _ _ = False

programs/TokenOps.hs:70:9:
No instance for (Text.Parsec.Prim.Stream
   s4 mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
  arising from a use of `tokenTest' at programs/TokenOps.hs:70:9-48
Possible fix:
  add an instance declaration for
  (Text.Parsec.Prim.Stream
 s4 mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
In the expression: tokenTest (TKDEF undefined undefined) tf
In the definition of `anyDef':
anyDef = tokenTest (TKDEF undefined undefined) tf
   where
   tf (TKDEF _ _) (TK
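
For what it's worth, the "possible fix" GHC keeps suggesting amounts to
pinning down each parser's type with a Stream constraint. A sketch of the
shape I have in mind, guessing at tokenTest's type (I have not checked this
against the hsffig source, so take it as a hint only):

import Control.Monad.Identity (Identity)
import Text.Parsec.Prim (ParsecT, Stream)

-- hypothetical signature: the constraint tells GHC which Stream
-- instance to use instead of leaving 's' ambiguous
commOpen :: Stream s Identity Token => ParsecT s u Identity Token
commOpen = tokenTest (TCOMM_OPEN undefined) tf
  where tf (TCOMM_OPEN _) (TCOMM_OPEN _) = True
        tf _ _ = False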

Re: [Haskell-cafe] Re: GHC predictability

2008-05-13 Thread Albert Y. C. Lai

Andrew Coppin wrote:
2. Does anybody know how to actually read GHC's Core output anyway? To 
me, it looks almost exactly like very, very complicated Haskell source 
with a suspicious concentration of case expressions - but I understand 
that in the Core language, many constructs actually mean something quite 
different.


I can read most of GHC Core. I was never taught, and I never asked. I 
did not read GHC's source code. I just guessed.


I agree with your assessment about the domination by case expressions. 
They spell out the evaluation order. You need truckloads of them.


I slightly agree with your assessment about "complicated", but I call it 
detailed and tedious. This is an assembly language for functional 
programming. If it weren't detailed and tedious, it would have no right 
to exist.
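
For a taste, here is a small function together with a hand-written
approximation of the Core GHC might produce for it. This is only a sketch
(real Core output has mangled names and explicit type arguments), but it
shows how the cases spell out the evaluation order:

f :: Maybe Int -> Int
f m = maybe 0 (+1) m

-- approximate Core:
-- f = \ (m :: Maybe Int) ->
--       case m of
--         Nothing -> I# 0#
--         Just x  -> case x of
--                      I# x# -> I# (x# +# 1#)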

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Commutative monads vs Applicative functors

2008-05-13 Thread Ronald Guida
I have a few questions about commutative monads and applicative functors.

From what I have read about applicative functors, they are weaker than
monads because with a monad, I can use the results of a computation to
select between alternative future computations and their side effects,
whereas with an applicative functor, I can only select between the
results of computations, while the structure of those computations and
their side effects are fixed in advance.

But then there are commutative monads.  I'm not exactly sure what a
commutative monad is, but my understanding is that in a commutative
monad the order of side effects does not matter.
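
To make that concrete, here is the property I have in mind, using Reader
from Control.Monad.Reader, which I believe is commutative. This is just my
sketch of the law, not a definition taken from anywhere authoritative:

import Control.Monad.Reader

-- swapping two independent actions should not change the result
lhs, rhs :: Reader Int Int
lhs = do { a <- ask; b <- asks (* 2); return (a + b) }
rhs = do { b <- asks (* 2); a <- ask; return (a + b) }
-- runReader lhs 5 == runReader rhs 5  (both give 15)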

This leads me to wonder, are commutative monads still stronger than
applicative functors, or are they equivalent?

And by the way, what exactly is a commutative monad?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Parsec3 performance issues (in relation to v2)

2008-05-13 Thread Neal Alexander

Philippa Cowderoy wrote:

On Tue, May 13, 2008 5:53 am, Neal Alexander wrote:

I can post the full profiling info if anyone really cares.



Any info is helpful. It's taking a while to get round to things, but the
more relevant info we have to hand when we do, the easier it is to improve
things and the less begging for data we have to do!


I stripped the code down to just the parsec related stuff and retested it.

http://72.167.145.184:8000/parsec_test/Parsec2.prof
http://72.167.145.184:8000/parsec_test/Parsec3.prof

And the parser with a 9mb (800 kb gziped) sample log file:
http://72.167.145.184:8000/parsec_test.tar.gz

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Parsec3 performance issues (in relation to v2)

2008-05-13 Thread Antoine Latter
On Tue, May 13, 2008 at 9:23 PM, Neal Alexander <[EMAIL PROTECTED]> wrote:
>  I stripped the code down to just the parsec related stuff and retested it.
>
>  http://72.167.145.184:8000/parsec_test/Parsec2.prof
>  http://72.167.145.184:8000/parsec_test/Parsec3.prof
>
>  And the parser with a 9mb (800 kb gziped) sample log file:
>  http://72.167.145.184:8000/parsec_test.tar.gz
>

So I've been picking at some ways to speed up Parsec 3.  I haven't had
much success at this benchmark, but one thing confused me:

In my hacked-up version, when I change the monadic type from a "data"
declaration to a "newtype" declaration, I get a significant slowdown.
In the program posted by Neal, I go from ~43 s with "data" to about 1
minute with a "newtype".

Is this expected?  I don't really understand why adding an extra layer
of indirection should speed things up.
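
To be concrete, the change I made has this shape (Parsec 3's real type is
more involved, so this is only an illustration):

-- with "data", values carry a runtime box around the parser function
data    ParserD s a = ParserD (s -> Maybe (a, s))

-- with "newtype", the wrapper is erased after type checking
newtype ParserN s a = ParserN (s -> Maybe (a, s))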

-Antoine
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re[2]: [Haskell-cafe] Re: Parsec3 performance issues (in relation to v2)

2008-05-13 Thread Bulat Ziganshin
Hello Antoine,

Wednesday, May 14, 2008, 8:43:47 AM, you wrote:

> Is this expected?  I don't really understand why adding an extra layer
> of indirection should speed things up.

adding laziness may improve performance by avoiding the calculation of
unnecessary results, or by moving work into a later stage where it is
immediately consumed
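
a "data" wrapper adds exactly this kind of laziness; here is a tiny
illustration (standard data/newtype semantics, nothing parsec-specific):

data    D a = D a
newtype N a = N a

-- seq stops at the D box; the payload stays an unevaluated thunk
ok  = D (error "never forced") `seq` "fine"   -- evaluates to "fine"
-- a newtype has no box, so seq forces the payload itself
bad = N (error "forced!")      `seq` "fine"   -- raises the error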


-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: GHC predictability

2008-05-13 Thread Richard A. O'Keefe


On 14 May 2008, at 8:58 am, Andrew Coppin wrote:
What I'm trying to say [and saying very badly] is that Haskell is an  
almost terrifyingly subtle language.


Name me a useful programming language that isn't.
Simply interchanging two for-loops, from
for (i = 0; i < N; i++) for (j = 0; j < N; j++)
to  for (j = 0; j < N; j++) for (i = 0; i < N; i++)
when marching over an array, can easily slow you down
by nearly two orders of magnitude in C.
[Hint: read Ulrich Drepper's "What Every Programmer Should
Know About Memory".]  For a real shock, take a look at what
your C++ templates are doing...

There's one big difference between Haskell and language T (my other
preferred language).  Seemingly insignificant changes in Haskell can
kill performance, but seemingly insignificant changes in language T
can take you into territory the library designers never thought of
where there are lions, tigers, and bears in abundance.  "Unexpectedly
slow" is better than "inexplicably buggy".


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Short circuiting and the Maybe monad

2008-05-13 Thread Janis Voigtlaender

Graham Fawcett wrote:

Yes, but that's still a 'quick' short-circuiting. In your example, if
'n' is Nothing, then the 'f >>= g >>= h' thunks will not be forced
(thanks to lazy evaluation), regardless of associativity. Tracing
verifies this:


No, it doesn't. What your tracing verifies is that the f, g, and h will
not be evaluated. It doesn't verify that the 'f >>= g >>= h' part of the
expression causes no evaluation overhead. Because that is not true.
Consider the following:


import Debug.Trace

data Maybe' a = Nothing' | Just' a  deriving Show

instance Monad Maybe' where
  return = Just'
  Nothing' >>= _ = trace "match" Nothing'
  Just' a  >>= k = k a

talk s = Just' . (trace s)
f = talk "--f"
g = talk "--g"
h = talk "--h"

foo n = n >>= f >>= g >>= h


Now:

*Main> foo Nothing'
match
match
match
Nothing'

So you get three pattern-matches on Nothing': since (>>=) is
left-associative, foo n parses as ((n >>= f) >>= g) >>= h, and each outer
(>>=) must re-match the Nothing' produced inside it. With the re-associated
variant


foo' n = n >>= \a -> f a >>= \b -> g b >>= h


you get only one:

*Main> foo' Nothing'
match
Nothing'

For a way to obtain such improvements automatically, and without
touching the code, you may want to look into

http://wwwtcs.inf.tu-dresden.de/~voigt/mpc08.pdf

Ciao, Janis.

--
Dr. Janis Voigtlaender
http://wwwtcs.inf.tu-dresden.de/~voigt/
mailto:[EMAIL PROTECTED]
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Trying to avoid duplicate instances

2008-05-13 Thread oleg

Eric Stansifer wrote:
> I am using a bunch of empty type classes to categorize some objects:
>
> > class FiniteSolidObject o
> > class FinitePatchObject o
> > class InfiniteSolidObject o
>
> Since "solid objects" are exactly "finite solid objects" plus
> "infinite solid objects", there is an obvious way to code up this
> logical relationship.  So I try to write:
>
> > class SolidObject o
> > instance FiniteSolidObject o => SolidObject o
> > instance InfiniteSolidObject o => SolidObject o

There is an easy way to accomplish your goal in GHC, with a couple of
simple tricks. First of all, it pays in this and many related cases
to define the full predicate for the classification of objects. That is,
rather than defining the constraint (FiniteSolidObject o) that
succeeds when 'o' is really a FiniteSolidObject and fails otherwise,
define a constraint (FiniteSolidObjectP o f) which always succeeds; it
unifies the second argument with HTrue if o is indeed
FiniteSolidObject, and unifies f with HFalse otherwise. We can easily
define the semi-predicate (FiniteSolidObject o) if so needed.
That is the first trick. One can then introduce typeclasses HAnd, HOr,
etc. and use them to define more complex predicates like SolidObject
(which is the logical OR of FiniteSolidObjectP and
InfiniteSolidObjectP).
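
For instance, the primitive predicate could look as follows (HTrue, HFalse
and TypeCast as in the HList library; these particular instances are my
illustration rather than code lifted from anywhere):

class FiniteSolidObjectP o f | o -> f
instance FiniteSolidObjectP Box    HTrue
instance FiniteSolidObjectP Sphere HTrue
-- default case: any other type is not a finite solid object
instance TypeCast f HFalse => FiniteSolidObjectP o f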

Defining the primitive predicates such as FiniteSolidObjectP does
involve a trick, the TypeCast. It may appear bizarre; but if the
point is to use it rather than contemplate it, one can just take the
pattern for granted. It works.
http://okmij.org/ftp/Haskell/typecast.html

But there is even a more uniform way, explained in
http://okmij.org/ftp/Haskell/poly2.hs
http://okmij.org/ftp/Haskell/poly2.txt

Using that file, we can write your code as follows:

{-# OPTIONS -fglasgow-exts #-}
{-# OPTIONS -fallow-undecidable-instances #-}
{-# OPTIONS -fallow-overlapping-instances #-}

module Mem where

-- primitive types
data Box    = Box
data Sphere = Sphere
data Mesh   = Mesh
data Plane  = Plane

-- classes of types
type FiniteSolidObjects = Box :*: Sphere :*: HNil
type FinitePatchObjects = Mesh :*: HNil
type InfiniteSolidObjects = Plane :*: HNil

-- All of finite and infinite solid objects are solid objects
type SolidObjects = AllOf FiniteSolidObjects :*: AllOf InfiniteSolidObjects
                    :*: HNil

-- membership predicate
-- Statically tests if an object of the type x is a member of the class c
is_of_class :: forall c x r. Apply (Member c) x r => x -> c -> r
is_of_class x t = apply (undefined::Member c) x


test1 = is_of_class Box (undefined::FiniteSolidObjects) -- type HTrue
test2 = is_of_class Box (undefined::SolidObjects) -- type HTrue
test3 = is_of_class Box (undefined::InfiniteSolidObjects) -- type HFalse
test4 = is_of_class Plane (undefined::SolidObjects) -- type HTrue

-- make a semi-predicate SolidObject

class SolidObject c
instance Apply (Member SolidObjects) x HTrue => SolidObject x

test_solid :: SolidObject x => x -> ()
test_solid = undefined

ts1 = test_solid Plane
-- ts3 = test_solid Mesh -- causes the type error



-- The following is borrowed verbatim from poly2.hs

type Fractionals = Float :*: Double :*: HNil
type Nums = Int :*: Integer :*: AllOf Fractionals :*: HNil
type Ords = Bool :*: Char :*: AllOf Nums :*: HNil
type Eqs  = AllOf (TypeCl OpenEqs) :*: AllOfBut Ords Fractionals :*: HNil


-- The Fractionals, Nums and Ords above are closed. But Eqs is open
-- (i.e., extensible), due to the following:
data OpenEqs
instance TypeCls OpenEqs () HTrue -- others can be added in the future

-- Type class membership testing

data AllOf x
data AllOfBut x y
data TypeCl x

-- Classifies if the type x belongs to the open class labeled l
-- The result r is either HTrue or HFalse
class TypeCls l x r | l x -> r

-- the default instance: x does not belong
instance TypeCast r HFalse => TypeCls l x r

-- Deciding the membership in a closed class, specified
-- by enumeration, union and difference

data Member tl
instance Apply (Member HNil) x HFalse

instance TypeCls l x r => Apply (Member (TypeCl l)) x r

instance (TypeEq h x bf, MemApp bf t x r)
    => Apply (Member (h :*: t)) x r

instance (Apply (Member h) x bf, MemApp bf t x r)
    => Apply (Member ((AllOf h) :*: t)) x r

instance (Apply (Member exc) x bf, Apply (MemCase2 h t x) bf r)
    => Apply (Member ((AllOfBut h exc) :*: t)) x r

class MemApp bf t x r | bf t x -> r
instance MemApp HTrue t x HTrue
instance Apply (Member t) x r => MemApp HFalse t x r

-- we avoid defining a new class like MemApp above.
-- I guess, after Apply, we don't need a single class ever?
data MemCase2 h t x
instance Apply (Member t) x r => Apply (MemCase2 h t x) HTrue r
instance Apply (Member ((AllOf h) :*: t)) x r
    => Apply (MemCase2 h t x) HFalse r

testm1 = apply (undefined::Member Fractionals) (1::Float)
testm2 = apply (undefined::Member Fractionals) (1::Int)
testm3 = apply (undefined::Member Fractionals) ()


-- The standard HList stuff, extracted from HList lib