Re: [Haskell-cafe] cool tools

2012-05-20 Thread Neil Davies
+1
On 20 May 2012, at 01:23, Simon Michael wrote:

 Well said!
 
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




[Haskell-cafe] How to write Source for TChan working with LC.take?

2012-05-20 Thread Hiromi ISHII
Hello, there.

I'm writing a Source to supply values from TChan.
I wrote three implementations for that goal as follows:


import Data.Conduit
import qualified Data.Conduit.List as LC
import Control.Monad.Trans
import Control.Concurrent.STM
import Control.Monad

sourceTChanRaw :: MonadIO m => TChan a -> Source m a
sourceTChanRaw ch = pipe
  where
    pipe = PipeM next (return ())
    next = do
      o <- liftIO $ atomically $ readTChan ch
      return $ HaveOutput pipe (return ()) o

sourceTChanState :: MonadIO m => TChan a -> Source m a
sourceTChanState ch = sourceState ch puller
  where
    puller ch = StateOpen ch `liftM` (liftIO . atomically $ readTChan ch)

sourceTChanYield :: MonadIO m => TChan a -> Source m a
sourceTChanYield ch = forever $ do
  ans <- liftIO . atomically $ readTChan ch
  yield ans


Namely, one using raw Pipe constructors directly, using `sourceState` and 
`yield`.
I tested these with GHCi.


ghci ch <- newTChanIO :: IO (TChan ())
ghci atomically $ replicateM_ 1500 $ writeTChan ch ()
ghci sourceTChanRaw ch $$ LC.take 10
[(),(),(),(),(),(),(),(),(),()]
ghci sourceTChanState ch $$ LC.take 10
[(),(),(),(),(),(),(),(),(),()]
ghci sourceTChanYield ch $$ LC.take 10
*thread blocks*


The first two versions behave exactly as I expected, but the last one does not:
the source written with `yield` never returns a value, even though there are
more than enough values available.

I also realized that following code runs perfectly as I expected:


ghci ch <- newTChanIO :: IO (TChan ())
ghci atomically $ replicateM_ 1500 $ writeTChan ch ()
ghci sourceTChanRaw ch $= LC.isolate 10 $$ LC.mapM_ print
[(),(),(),(),(),(),(),(),(),()]
ghci sourceTChanState ch $= LC.isolate 10 $$ LC.mapM_ print
[(),(),(),(),(),(),(),(),(),()]
ghci sourceTChanYield ch $= LC.isolate 10 $$ LC.mapM_ print
[(),(),(),(),(),(),(),(),(),()]


So, here is the question:

Why doesn't the Source written with `yield` work as expected with LC.take?

Or, put another way:

Semantically, what behaviour should be expected of LC.take?


Thanks,

-- Hiromi ISHII
konn.ji...@gmail.com






Re: [Haskell-cafe] How to write Source for TChan working with LC.take?

2012-05-20 Thread Hiromi ISHII
Oops, sorry.
The last case's behaviour was not what I expected. A correct log is below:


ghci sourceTChanRaw ch $$ LC.isolate 10 =$= LC.mapM_ print
()
()
()
()
()
()
()
()
()
()
ghci sourceTChanState ch $$ LC.isolate 10 =$= LC.mapM_ print
()
()
()
()
()
()
()
()
()
()
ghci sourceTChanYield ch $$ LC.isolate 10 =$= LC.mapM_ print
()
()
()
()
()
()
()
()
()
()
*blocks*


So again, sourceTChanYield blocks here, even though it has already been supplied
with enough values!

-- Hiromi ISHII
konn.ji...@gmail.com






[Haskell-cafe] Large graphs

2012-05-20 Thread Benjamin Ylvisaker
I have a problem that I'm trying to use Haskell for, and I think I'm running 
into scalability issues in FGL.  However, I am quite new to practical 
programming in Haskell, so it's possible that I have some other bone-headed 
performance bug in my code.  I tried looking around for concrete information 
about the scalability of Haskell's graph libraries, but didn't find much.  So 
here are the characteristics of the problem I'm working on:

- Large directed graphs.  Mostly 10k-100k nodes, but some in the low 100ks.
- Sparse graphs.  The number of edges is only 2-3x the number of nodes.
- Immutable structure, mutable labels.  After initially reading in the graphs, 
their shape doesn't change, but information flows around the graph, changing 
the labels on nodes and edges.

I wrote some code that reads in graphs and runs some basic flow computations on
them.  The first few graphs I tried were around 10k nodes, and the performance 
was okay (on the order of several seconds).  When I tried some larger graphs 
(~100k), the memory consumption spiked into multiple GB, the CPU utilization 
went down to single digit percentages and the overall running time was closer 
to hours than seconds.

Because the graph structure is basically immutable for my problem, I'm tempted 
to write my own graph representation based on mutable arrays.  Before I embark 
on that, I wonder if anyone else can share their experience with large graphs 
in Haskell?  Is there a library (FGL or otherwise) that should be able to scale 
up to the size of graph I'm interested in, if I write my code correctly?

Thanks,
Ben



Re: [Haskell-cafe] Large graphs

2012-05-20 Thread Serguey Zefirov
2012/5/20 Benjamin Ylvisaker benjam...@fastmail.fm:
 I have a problem that I'm trying to use Haskell for, and I think I'm running 
 into scalability issues in FGL.  However, I am quite new to practical 
 programming in Haskell, so it's possible that I have some other bone-headed 
 performance bug in my code.  I tried looking around for concrete information 
 about the scalability of Haskell's graph libraries, but didn't find much.  So 
 here are the characteristics of the problem I'm working on:

 - Large directed graphs.  Mostly 10k-100k nodes, but some in the low 100ks.
 - Sparse graphs.  The number of edges is only 2-3x the number of nodes.
 - Immutable structure, mutable labels.  After initially reading in the 
 graphs, their shape doesn't change, but information flows around the graph, 
 changing the labels on nodes and edges.

I would like to suggest a representation based on 32-bit integers as
vertex indices, i.e., roll your own.

Use a strict IntMap IntSet for neighbor information; it is very efficient.
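A minimal sketch of what such a roll-your-own representation could look like (the names here are illustrative, not from any library):

```haskell
import qualified Data.IntMap.Strict as IM
import qualified Data.IntSet as IS
import Data.List (foldl')

-- Vertices are plain Ints; the graph maps each vertex to its set of
-- successors.  Using the strict IntMap keeps the values evaluated as
-- the graph is built.
type Graph = IM.IntMap IS.IntSet

addEdge :: Int -> Int -> Graph -> Graph
addEdge u v = IM.insertWith IS.union u (IS.singleton v)

neighbors :: Int -> Graph -> IS.IntSet
neighbors u = IM.findWithDefault IS.empty u

-- Build a directed graph from an edge list with a strict left fold,
-- so no chain of thunks accumulates over a large input.
fromEdges :: [(Int, Int)] -> Graph
fromEdges = foldl' (\g (u, v) -> addEdge u v g) IM.empty
```

Labels can then live in a separate `IntMap label` keyed by the same vertex indices, which fits an immutable-shape / changing-labels workload.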

 I wrote some code that reads in graphs and some some basic flow computations 
 on them.  The first few graphs I tried were around 10k nodes, and the 
 performance was okay (on the order of several seconds).  When I tried some 
 larger graphs (~100k), the memory consumption spiked into multiple GB, the 
 CPU utilization went down to single digit percentages and the overall running 
 time was closer to hours than seconds.

It looks like your code does not force everything: it leaves some thunks
unevaluated. Check for that situation.

It is a common pitfall, and not only for computations on graphs.
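As a tiny illustration of the pitfall (not Benjamin's code): a lazy left fold builds a chain of unevaluated thunks, while its strict sibling keeps the accumulator evaluated at every step:

```haskell
import Data.List (foldl')

-- lazySum delays every (+), so a long input builds millions of nested
-- thunks before anything is evaluated; strictSum forces the
-- accumulator at each step and runs in constant space.
lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0   -- accumulates thunks
strictSum = foldl' (+) 0   -- constant space
```

The same effect hides inside graph-label updates: storing `f oldLabel` without forcing it leaves the whole history of updates live in memory.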


 Because the graph structure is basically immutable for my problem, I'm 
 tempted to write my own graph representation based on mutable arrays.  Before 
 I embark on that, I wonder if anyone else can share their experience with 
 large graphs in Haskell?  Is there a library (FGL or otherwise) that should 
 be able to scale up to the size of graph I'm interested in, if I write my 
 code correctly?

The above structure (IntMap IntSet) allowed for fast computations on
relatively large graphs, on the order of 1M vertices and 16M
undirected / 32M directed edges.



Re: [Haskell-cafe] darcs patch dependencies in dot format

2012-05-20 Thread Soenke Hahn
On 05/16/2012 11:43 AM, wren ng thornton wrote:
 On 5/12/12 8:52 AM, Sönke Hahn wrote:
 Any comments or suggestions?
 
 Cabalize it and release it on Hackage. But especially the cabalization
 part :)

Both done:

http://hackage.haskell.org/package/darcs2dot





Re: [Haskell-cafe] Large graphs

2012-05-20 Thread Clark Gaebel
I had issues with FGL in the past, too. Although FGL is really nice to
work with, it just uses a ridiculous amount of memory for large
graphs.

In the end, I used Data.Graph from containers [1]. This was a lot more
reasonable, and let me finish my project relatively easily.

Regards,
  - Clark

[1] 
http://hackage.haskell.org/packages/archive/containers/0.5.0.0/doc/html/Data-Graph.html
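For anyone curious, here is a small sketch of the Data.Graph API with toy data (the labels and keys are made up for illustration); the structure is immutable, and labels live behind a lookup function, which matches the immutable-shape / changing-labels split described above:

```haskell
import Data.Graph (graphFromEdges, topSort)

-- graphFromEdges takes (label, key, [keys of successors]) triples and
-- returns the immutable graph plus a Vertex -> triple lookup.
orderedLabels :: [String]
orderedLabels = [label v | v <- topSort g]
  where
    (g, fromVertex, _toVertex) =
      graphFromEdges [ ("start", 'a', "bc")  -- a -> b, a -> c
                     , ("mid",   'b', "c")   -- b -> c
                     , ("end",   'c', "")    -- no successors
                     ]
    label v = let (l, _, _) = fromVertex v in l
```

For mutable labels one would keep a separate `IntMap label` (or mutable array) indexed by `Vertex`, leaving the `Graph` itself untouched.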

On Sun, May 20, 2012 at 10:55 AM, Serguey Zefirov sergu...@gmail.com wrote:
 2012/5/20 Benjamin Ylvisaker benjam...@fastmail.fm:
 I have a problem that I'm trying to use Haskell for, and I think I'm running 
 into scalability issues in FGL.  However, I am quite new to practical 
 programming in Haskell, so it's possible that I have some other bone-headed 
 performance bug in my code.  I tried looking around for concrete information 
 about the scalability of Haskell's graph libraries, but didn't find much.  
 So here are the characteristics of the problem I'm working on:

 - Large directed graphs.  Mostly 10k-100k nodes, but some in the low 100ks.
 - Sparse graphs.  The number of edges is only 2-3x the number of nodes.
 - Immutable structure, mutable labels.  After initially reading in the 
 graphs, their shape doesn't change, but information flows around the 
 graph, changing the labels on nodes and edges.

 I would like to suggest to you a representation based in 32-bit
 integers as vertex index. I.e., roll your own

 Use strict IntMap IntSet for neighbor information, it is very efficient.

 I wrote some code that reads in graphs and some some basic flow computations 
 on them.  The first few graphs I tried were around 10k nodes, and the 
 performance was okay (on the order of several seconds).  When I tried some 
 larger graphs (~100k), the memory consumption spiked into multiple GB, the 
 CPU utilization went down to single digit percentages and the overall 
 running time was closer to hours than seconds.

 Looks like your code does not force everything. It leaves some thunks
 unevaluated, check for that situation.

 It is common pitfall, not only for computations on graphs.


 Because the graph structure is basically immutable for my problem, I'm 
 tempted to write my own graph representation based on mutable arrays.  
 Before I embark on that, I wonder if anyone else can share their experience 
 with large graphs in Haskell?  Is there a library (FGL or otherwise) that 
 should be able to scale up to the size of graph I'm interested in, if I 
 write my code correctly?

 The above structure (IntMap IntSet) allowed for fast computations on
 relatively large arrays, in order of 1M vertices and 16M
 undirected/32M directed edges.



Re: [Haskell-cafe] darcs patch dependencies in dot format

2012-05-20 Thread Soenke Hahn
On 05/13/2012 04:16 PM, Francesco Mazzoli wrote:
 I found Gephi (https://gephi.org/) quite good when I had to visualize
 big graphs, and it supports dot files so you can try it out easily.

gephi looks very interesting, thanks.




Re: [Haskell-cafe] darcs patch dependencies in dot format

2012-05-20 Thread Soenke Hahn
On 05/14/2012 04:21 PM, Simon Michael wrote:
 In a 2000-patch repo it took 22 hours:
 http://joyful.com/darcsden/simon/hledger/raw/patchdeps.pdf

:)

 It should escape double-quotes in patch names, I did that manually.

That should be fixed (in the repo).




[Haskell-cafe] A functional programming solution for Mr and Mrs Hollingberry

2012-05-20 Thread Andreas Pauley
Hi all,

I'm in the process of learning how to approach problems from a
functional perspective, coming from an Object Oriented background
(mostly Smalltalk).

One of the general concerns/questions raised when talking to people in
a similar position is:
How do I design/model a problem when I don't have my trusted classes
and objects available?

With this in mind I've created a programming exercise where I imagine
an OO programmer would use an object hierarchy with subtype
polymorphism as part of the solution. And then I'd like to compare
functional implementations of the same problem:

https://github.com/apauley/HollingBerries

I want to see how elegant a solution I can get in a functional
language, given that the problem description is not really elegant at
all. It has a few annoying exceptions to the normal rules, typical of
what one might get in a real specification from some client.

Currently there are 3 implementations:
 - one in Erlang, my attempt at implementing a functional solution
 - one in Ruby, my attempt to see how an object hierarchy could be used
 - one in Clojure, done by one of the people in our FP user group [1]

I would love to include some Haskell implementations as well, if any
of you are interested.

Kind regards,
Andreas Pauley

1. http://www.meetup.com/lambda-luminaries/

-- 
http://pauley.org.za/
http://twitter.com/apauley



Re: [Haskell-cafe] darcs patch dependencies in dot format

2012-05-20 Thread Soenke Hahn
On 05/16/2012 11:43 AM, wren ng thornton wrote:
 Also, have you compared your transitive reduction to just outputting the
 whole graph and then using `tred`? The latter approach has the distinct
 downside of needing to serialize the whole graph; but it could still be
 a win unless you intend to implement similar algorithms yourself. The
 smart algorithms do *much* better than brute force.

I've done some profiling and found that the executable is spending about
half of its time executing my brute force graph algorithm. So doing
something smarter here (like using tred) seems like a good idea.

The bad news is that without running my inefficient algorithm the
executable still doesn't scale well. Maybe there is a better way to let
the darcs library compute the patch dependencies.

 And of course it'd be nice to be able to pass arguments to the program
 in order to filter and otherwise manipulate the resulting graph. A lot
 of that can be done by special programs which only know about the Dot
 language (e.g., tred), so you should only focus on things which aren't
 captured by the Dot language or are otherwise only knowable by querying
 Darcs.

Sounds reasonable. Command line options would be nice.




[Haskell-cafe] Cardinal Haskell RBTree? And structural sharing.

2012-05-20 Thread Matt Lamari

I dug this Haskell code out of the Internet Wayback Machine.  Don't worry,
Dons - it may be old, but it's still golden. . .  It seems to be from
the Okasaki book, with a working delete and a key-value variant:

http://web.archive.org/web/20100629235553/http://www.cse.unsw.edu.au/~dons/data/RedBlackTree.html

I'm still at the lower end of Haskell expertise, but I can read this,
and am porting the concept into my Heresy library for CL.  The algorithm
seems to be proven via some Haskell static means (that I don't fully
understand), but it also holds up to my heavy unit tests.

What I value from functional containers, though, is that the modified
structure shares the bulk of its identity/nodes with the prior variant.
So I added sharing as a metric to my unit test: it can discover when a
new (by identity) node exists in the modified/new reference that is
logically analogous to a node in the old/prior one. . . .  That would
waste space if people are holding various references to evolving
containers.

When purely translating this code, even in zero-change cases, one
function will build a red node from a black one, only to have the caller
use *that* to build a brand-new black one - since across the two levels
it has no way of seeing that the optimal node is merely the original
node.

Through some fundamental changes, and a bit of whack-a-mole
(especially in the delete case), I *think* I caught and removed them all.
. . .

Anyway, my point: it seems that these containers can be analyzed with
respect to their pure algorithmic/logical correctness, but also with
respect to their structural storage.  Is there much talk on that side of
the issue?  Is there any canonical reference for this or similar data
structures with the storage issue resolved?  I'd be interested in
looking at another implementation, especially one where the size issue
isn't just tested, but proven. . . .






Re: [Haskell-cafe] A functional programming solution for Mr and Mrs Hollingberry

2012-05-20 Thread Artyom Kazak

Challenge accepted! I have written a solution in Haskell; please merge :)



Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-20 Thread Richard O'Keefe

On 19/05/2012, at 5:51 AM, Isaac Gouy wrote:
 In the 'tsort' case, it turns out that the Java and Smalltalk
 versions are I/O bound with over 90% of the time spent just
 reading the data. 
 
 My guess is that they could be written to do better than that - but it's 
 idiotic of me to say so without understanding the specifics, please 
 forgive me ;-)

Actually, I/O bound is *good*.

Here are the times from the C version, which has been hacked hard in order
to be as fast as I could make it.

      N     total      input     process     output
  1000;  0.004618 = 0.004107 + 0.000438 + 0.000073
  2000;  0.014467 = 0.012722 + 0.001609 + 0.000136
  5000;  0.059810 = 0.051308 + 0.008199 + 0.000303
 10000;  0.204111 = 0.150638 + 0.052800 + 0.000673
 20000;  0.717362 = 0.518343 + 0.197655 + 0.001364
 50000;  3.517340 = 2.628550 + 0.885456 + 0.003331

N here is the number of nodes in the graph;
the number of edges is floor(N**1.5 * 0.75).
Input is the read-word + look up in hash table time.
Process is the compute-the-transitive-closure time.
Output is the time to write the node names in order.
All node names had the form x## with ## being 1..1.
This is with my own low-level code; using scanf("%...s")
pushed the input time up by 40%.

The Mac OS X version of the tsort command took
31.65 CPU seconds for N=10,000, of which
28.74 CPU seconds was 'system'.

Like I said, the languages I used in this test
 ... have I/O libraries with very different
 structures, so what does identical algorithms mean?  If you
 are using dictionaries/hashmaps, and the two languages have
 implementations that compute different hash functions for strings,
 is _that_ using the same implementation? 
 
 Of course, to some degree, user defined hash functions remedy that specific 
 problem.

While creating other, and perhaps worse, ones.

For example, in the Smalltalk code, if you use a Dictionary of Strings,
you're getting Robert Jenkins's hash function in optimised C.  If you
supply your own, you're getting a very probably worse hash function
and it's going to run rather slower.  And above all, the stuff you are
benchmarking is no longer code that people are actually likely to write.

 But we're still going to ask - Will my program be faster if I write it in 
 language X? - and we're 
 still going to wish for a simpler answer than - It depends how you write it!

Here's another little example.  I had a use for the Singular Value Decomposition
in a Java program.  Should I use pure Java or native C?

Pure Java taken straight off the CD-ROM that came with a large
book of numerical algorithms in Java:   T seconds.

After noticing that the code was just warmed-over Fortran, varying the
leftmost subscript fastest (which is good for Fortran, bad for most
other languages), and after swapping all the matrix dimensions: T/2
seconds.

After rewriting in C:  T/4 seconds.

After rewriting the C code to call the appropriate BLAS
and thereby using tuned code for the hardware, T/7 seconds.

Since this was going to take hundreds of seconds per run, the answer was easy.

A simple little thing like using a[i][j] vs a[j][i] made a
factor of 2 difference to the overall speed.

"It depends" is the second-best answer we can get.
The best answer is one that says _what_ it depends on.




Re: [Haskell-cafe] A functional programming solution for Mr and Mrs Hollingberry

2012-05-20 Thread Artyom Kazak
Andreas Pauley apau...@gmail.com wrote in his message of Sun, 20 May
2012 20:33:13 +0300:



I want to see how elegant a solution I can get in a functional
language, given that the problem description is not really elegant at
all. It has a few annoying exceptions to the normal rules, typical of
what one might get in a real specification from some client.


After taking a look at the other solutions, I feel I should explain
myself, so I'd better do it without prompting :)

  - Nothing was said about meaningful error messages, so I didn't bother.
  - I had decided against defining constants like
    `supplier_markup_percentage_modification` separately;
    `PremiumSupplierIDs` and the markup table are defined locally in the
    `calc` function, too.  The latter two issues are fixed in the next
    version, as someone may consider them to be against elegance.
  - Surprisingly, all solutions use explicit comparisons to determine
    the product category.  While that is okay for continuous ranges of
    codes, it doesn't scale and isn't really elegant.  Fixed as well.



Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-20 Thread Richard O'Keefe

 How much is hard to port a haskell program to C ?
 If it will become harder and harder, (i.e. for parallelizations) than
 it's fair to choose haskell for performance, but if it's not, I think
 it's hard to think that such a high level language could ever compile
 down to something running faster than its port to C.

There is a logic programming language called Mercury;
it has strict polymorphic types and strict modes and it supports
functional syntax as well as Horn clause syntax.  You could think
of it as 'strict Clean with unification'.

In the early days, they had a list processing benchmark where
the idiomatic Mercury version of the program was faster than
the idiomatic C version of the program, despite the fact that
at the time Mercury was compiling via C.

The answer was that the kind of C code being generated by Mercury
was not the kind of code any sane programmer would ever have written
by hand.  It really does depend on how you write it.

 Will hardware really go for hundreds of cores ?

You can already buy a 700 core machine (I have _no_ idea how many
chips are involved in that) for Java.





[Haskell-cafe] Mighttpd 2.6.0 has been released

2012-05-20 Thread 山本和彦
Hello cafe,

I have released Mighttpd 2.6.0:

http://mew.org/~kazu/proj/mighttpd/en/

Some users started using Mighttpd 2 and I was requested to implement
missing features for real world operation. So, I implemented the
following features in Mighttpd 2.6.0:

- Route file reloading
- Graceful shutdown for upgrading
- URL rewriting (with HTTP redirection)

For more information, please read the homepage above.

I would like to thank Michael Snoyman for merging the necessary patch
to warp.

Regards,

--Kazu



Re: [Haskell-cafe] How to write Source for TChan working with LC.take?

2012-05-20 Thread Michael Snoyman
I agree that this behavior is non-intuitive, but still believe it's
the necessary approach. The short answer to why it's happening is that
there's no exit path in the yield version of the function. To
understand why, let's expand the code a little bit. Realizing that

liftIO = lift . liftIO

and

lift mr = PipeM (Done Nothing `liftM` mr) (FinalizeM mr)

we can expand the yield version into:

sourceTChanYield2 ch = forever $ do
  let action = liftIO . atomically $ readTChan ch
  ans <- PipeM (Done Nothing `liftM` action) (FinalizeM action)
  yield ans

So the first hint that something is wrong is that the finalize
function is calling the action. If you try to change that finalize
action into a no-op, e.g.:

sourceTChanYield3 :: MonadIO m => TChan a -> Source m a
sourceTChanYield3 ch = forever $ do
  let action = liftIO . atomically $ readTChan ch
  ans <- PipeM (Done Nothing `liftM` action) (return ())
  yield ans

then you get an error message:

test.hs:36:53:
Could not deduce (a ~ ())

The problem is that, as the monadic binding is set up here, the code
says "after running the PipeM, I want you to continue by yielding, and
then start over again."  If you want to expand it further, you can
change `forever` into a recursive call, expand `yield`, and then
expand all the monadic binds.  Every finalization call is forcing
things to keep running.

And remember: all of this is the desired behavior of conduit, since we
want to guarantee finalizers are always called. Imagine that, instead
of reading data from a TChan, you were reading from a Handle. In the
code above, there was no way to call out to the finalizers.

Not sure if all of that rambling was coherent, but here's my
recommended solution. What we need is a helper function that allows
you to branch based on whether or not it's time to clean up. `lift`,
`liftIO`, and monadic bind all perform the same actions regardless of
whether or not finalization is being called. The following code,
however, works correctly:

liftFinal :: Monad m => m a -> Finalize m () -> (a -> Source m a) -> Source m a
liftFinal action final f = PipeM (liftM f action) final

sourceTChanYield :: (Show a, MonadIO m) => TChan a -> Source m a
sourceTChanYield ch = liftFinal
    (liftIO . atomically $ readTChan ch)
    (return ())
    $ \ans -> do
        yield ans
        sourceTChanYield ch

Michael

On Sun, May 20, 2012 at 4:22 PM, Hiromi ISHII konn.ji...@gmail.com wrote:
 Oops, sorry.
 The last case's behaviour was not as I expected... A correct log is below:

 
 ghci sourceTChanRaw ch $$ LC.isolate 10 =$= LC.mapM_ print
 ()
 ()
 ()
 ()
 ()
 ()
 ()
 ()
 ()
 ()
 ghci sourceTChanState ch $$ LC.isolate 10 =$= LC.mapM_ print
 ()
 ()
 ()
 ()
 ()
 ()
 ()
 ()
 ()
 ()
 ghci sourceTChanYield ch $$ LC.isolate 10 =$= LC.mapM_ print
 ()
 ()
 ()
 ()
 ()
 ()
 ()
 ()
 ()
 ()
 *blocks*
 

 So again, sourceTChanYield blocks here even if it is already supplied with 
 enough values!

 -- Hiromi ISHII
 konn.ji...@gmail.com






Re: [Haskell-cafe] A functional programming solution for Mr and Mrs Hollingberry

2012-05-20 Thread Andreas Pauley
On Mon, May 21, 2012 at 12:54 AM, Artyom Kazak artyom.ka...@gmail.com wrote:
 Andreas Pauley apau...@gmail.com wrote in his message of Sun, 20 May 2012
 20:33:13 +0300:


 I want to see how elegant a solution I can get in a functional
 language, given that the problem description is not really elegant at
 all. It has a few annoying exceptions to the normal rules, typical of
 what one might get in a real specification from some client.


 After taking a look at other solutions, I feel like I will have to explain
 myself, so I’d better do that without prompting :)

  - nothing was said about meaningful error messages, so I didn’t bother.
  - I had decided against defining constants like
 `supplier_markup_percentage_modification`
    separately; `PremiumSupplierIDs` and markup table are defined locally in
 the `calc`
    function, too. The latter two issues are fixed in the next version, as
 someone
    may consider them to be against elegance.
  - surprisingly, all solutions use explicit comparisons to determine the
 product
    category. While it is okay for continuous ranges of codes, it doesn’t
 scale and
    not really elegant. Fixed as well.

Thanks, I've merged this. Had a quick look at the code, I like it :-)

-- 
http://pauley.org.za/
http://twitter.com/apauley
