[Haskell-cafe] Parallel Haskell Digest 9

2012-04-20 Thread Eric Kow
Parallel Haskell Digest 9
=========================

Hello Haskellers!

The Google Summer of Code is upon us and students have already submitted
their proposals. There are a couple of potential projects on concurrent
data structures, which we'll have a look at below.

We will also be continuing our tour of Haskell concurrency abstractions
with our word of the month, *transaction*. This digest is brought to you by the
Parallel GHC project, an MSR-sponsored effort to push parallel Haskell
technologies out into the real world.  Check our project news below to
see how we're doing on that front.

Finally, you may have heard Functional Propaganda from a Simon or two.
But how would the same message affect you if it came from a hardcore C++
hacker?  If you haven't seen it making the rounds, have a quick look at
Bartosz Milewski's [The Downfall of Imperative Programming][b0], and
maybe tell your imperative friends? The FP monster is hatching from its
academic egg; best be prepared!

[HTML version](http://www.well-typed.com/blog/65)

News
--
Let's have a quick look at some of those GSoC proposals, particularly
those with a parallel Haskell theme.  It's all about performance this
year. Two of the proposals involve using or improving parallelism
in Haskell, and four are related to high-performance concurrency.

* [Parallel CSG engine][g2]

 Constructive Solid Geometry (CSG) is the common approach to define
 complex bodies in engineering applications, ray tracing engines
 and physical simulators. Dmitry Dzhus' proposal is to deliver
 a fast parallel CSG engine using the Accelerate/Repa libraries
 for vectorised computations on GPU. 

* [NUMA supporting features for GHC][g7]
 
 Sajith Sasidharan wants to reach into the GHC RTS, with the
 aim of “extracting that last ounce of performance from NUMA
 systems, by firing all CPUs if/when necessary and by ensuring a
 suitably NUMA-aware memory allocation behavior.”  Work in this
 area would benefit folks doing scientific computing, who may
 need great amounts of task and data parallelism.

* [Windows support for the new GHC I/O manager][g6]

 The new I/O manager (GHC 7) brings great scalability improvements
 to Haskell — 10k simultaneous connections to your web server? no
 sweat.  Unfortunately for Haskellers on Windows, these improvements
 are currently only available on Unix.  Mikhail Glushenkov
 (who worked on GSoC last year to make improvements to Cabal)
 proposes to remedy this, making the new manager available on
 Windows. The challenge is that Windows I/O completion ports have
 slightly different semantics than their epoll/kqueue analogues on
 Unix.

* [Lock-free hash table and priority queue][g3]

 More along the theme of high performance concurrency, Florian
 Hartwig's project aims to use GHC's new-ish atomic compare-and-swap
 (CAS) primitive to implement a high-performance lock-free hash
 table and a lock-free priority queue in Haskell.  The CAS
 instruction is a bit of hardware support for concurrency: it
 compares a value in memory to an expected value, and iff they
 match, replaces it with a new value (a small semantic sketch in
 Haskell follows this list).

* [Implement Concurrent Hash-table / Hashmap][g5]

 Loren Davis also proposes a thread-safe mutable hash table
 implementation in Haskell. Loren is still weighing some of the
 alternative approaches suggested in the Haskell community. He is
 currently leaning towards a lock-striping approach as it would
 fulfill an immediate need in the community.

* [Concurrent Datastructures with good Performance][g4]

 Mathias Bartl aims to implement two concurrent data types 
 in Haskell, along with the usual battery of automated unit tests
 and benchmarks. The two that Mathias has in mind are a lock-free
 concurrent bag, and a concurrent priority queue.
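
Here is the small semantic sketch promised above: a toy model in Haskell of
the compare-and-swap contract, not the hardware primitive itself nor any
particular library's API.

    import Data.IORef (IORef, atomicModifyIORef)

    -- Semantic model only: this is not the machine instruction and not
    -- GHC's casMutVar# primop; it just borrows the atomicity of
    -- atomicModifyIORef to show what compare-and-swap promises.
    casModel :: Eq a => IORef a -> a -> a -> IO Bool
    casModel ref expected new =
      atomicModifyIORef ref $ \current ->
        if current == expected then (new, True) else (current, False)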

Parallel GHC project update
--
We have been continuing our work to make ThreadScope more helpful and
informative in tracking down your parallel and concurrent Haskell
performance problems.  We now have the ability to collect heap
statistics from the GHC runtime system and present them in ThreadScope.
These features will be available for users of a recent development GHC
(7.5.x) or the eventual 7.6 release. In addition to heap statistics,
we have been working on collecting information from hardware performance
counters, more specifically adding [support for Linux Perf
Events][hs-perf]. This could be useful for studying IO-heavy programs,
the idea being to visualise system calls as being distinct from
actual execution of Haskell code.

Speaking of performance, we are also continuing work on the new Cloud
Haskell implementation (see Duncan Coutts' [Fun in the Afternoon
Talk][t4]), and have lately been focused on reducing message 

[Haskell-cafe] Profiling with QtHaskell

2012-04-20 Thread Øystein Kolsrud
Hi! Does anyone know if it is possible to use QtHaskell with profiling
turned on?

-- 
Mvh Øystein Kolsrud


Re: [Haskell-cafe] ANN: signed-multiset-0.1

2012-04-20 Thread Stefan Holdermans
Richard,

Thanks for your excellent feedback! Very helpful!

 Signed multisets are unfamiliar to most of us, and I for one found the paper 
 a little fast-paced.  Can you put a bit more into the documentation?

Definitely. That's what weekends are for... :) I'll add some sections to the 
documentation that explain the main concepts and intuitions.

 Just for starters, I found it confusing when the documentation talks about 
 an element with multiplicity zero, because in the _paper_ a value that has 
 multiplicity zero in an mzset is NOT an element of it.

I see... It seems that with multisets every value of the appropriate type is an 
element of the set, but some values (i.e., those with multiplicities > 0) are 
more of an element than others (i.e., those with multiplicity 0). In the paper 
this is reflected by making the element-of relation a ternary relation 
between elements, sets, and multiplicities (with a functional dependency from 
elements and sets to multiplicities). Confusingly, not-an-element-of notation 
is used as an abbreviation for element-of with multiplicity zero.  

The library labels elements with multiplicities > 0 as members.
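
To make that concrete, here is a toy model, a plain Data.Map rather than the
library's actual representation or API:

  import qualified Data.Map as Map

  -- Toy model: a signed multiset as a map from values to multiplicities,
  -- which may be negative.  Names and representation are illustrative only.
  type SMSet a = Map.Map a Int

  multiplicityOf :: Ord a => a -> SMSet a -> Int
  multiplicityOf x = Map.findWithDefault 0 x

  -- Following the wording above: members are the values whose multiplicity
  -- is greater than zero; values with multiplicity zero are "elements"
  -- only in the ternary-relation sense.
  isMember :: Ord a => a -> SMSet a -> Bool
  isMember x s = multiplicityOf x s > 0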

 For a specific example, I haven't the faintest intuition about
 what 'map' should do.  Suppose we have
   {(k1)x1, (k2)x2}
 and f x1 == f x2 = y.  Should the value of map f {...} be
 {(k1+k2)y} or {(k1`max`k2)y} or what?

'Should' is a tricky term here. It depends on what you consider to be the 
structure of a multiset. Currently, map applies its function argument copywise:

  {x1_1, x1_2, ..., x1_k1, x2_1,   x2_2,   ..., x2_k2  }
     |     |          |      |       |            |
     | f   | f        | f    | f     | f          | f
     v     v          v      v       v            v
  {y_1,  y_2,  ..., y_k1,  y_k1+1, y_k1+2, ..., y_k1+k2}

That is, we preserve the additive structure.
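
In code, on a toy map-of-multiplicities model (illustrative only, not the
library's implementation), the two candidate behaviours for map would be:

  import qualified Data.Map as Map

  type SMSet a = Map.Map a Int   -- toy model: value mapped to multiplicity

  -- Copywise map: values that collide under f accumulate their
  -- multiplicities, i.e. k1 copies and k2 copies become k1+k2 copies.
  mapAdd :: Ord b => (a -> b) -> SMSet a -> SMSet b
  mapAdd f = Map.mapKeysWith (+) f

  -- Alternative: colliding values keep the larger multiplicity,
  -- k1 `max` k2, which generalises map on ordinary sets.
  mapMax :: Ord b => (a -> b) -> SMSet a -> SMSet b
  mapMax f = Map.mapKeysWith max f

With Richard's example, mapAdd f yields {(k1+k2)y} and mapMax f yields
{(k1`max`k2)y}.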

But perhaps this is inconsistent: the Monoid instance for SignedMultiset 
operates on the extremal structure rather than the additive structure. A 
wrapper type Additive,

  newtype Additive a = Additive (SignedMultiset a)

supplies a monoid instance that operates on the additive structure. So, 
perhaps, it's better to have map, as it is implemented now, operate on Additive 
rather than SignedMultiset, and to have a map for SignedMultiset that uses max 
rather than (+).

A rationale for this would be that, this way, map would be a proper 
generalisation of map on ordinary sets. (The same rationale has been applied to union.)
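
In terms of the same toy map-of-multiplicities model, the two structures
correspond to two ways of combining maps (illustrative only; the package's
instances may be defined quite differently):

  import qualified Data.Map as Map

  type SMSet a = Map.Map a Int

  -- Extremal structure (what the SignedMultiset Monoid instance is said to
  -- use above): keep the larger multiplicity per value.
  unionExtremal :: Ord a => SMSet a -> SMSet a -> SMSet a
  unionExtremal = Map.unionWith max

  -- Additive structure (what the Additive wrapper exposes): sum the
  -- multiplicities per value.
  unionAdditive :: Ord a => SMSet a -> SMSet a -> SMSet a
  unionAdditive = Map.unionWith (+)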

Pondering over this, I realise that the difference operator also doesn't 
properly generalise the concept for ordinary sets. However, that concept indeed 
seems a bit more tricky to generalise.

 Are you _sure_ that the definitions of isSubmultisetOf and
 isProperSubmultisetOf are correct?  Proper subthing is
 normally taken as requiring that only *one* member of the
 larger collection occur strictly fewer times in the smaller;
 here you require that for *all*.

You are right: isProperSubmultisetOf is too restrictive. I'll fix it.

Thanks,

  Stefan


Re: [Haskell-cafe] ANN: signed-multiset-0.1

2012-04-20 Thread Stefan Holdermans
Wren,

 For a specific example, I haven't the faintest intuition about
 what 'map' should do.  Suppose we have
  {(k1)x1, (k2)x2}
 and f x1 == f x2 = y.  Should the value of map f {...} be
 {(k1+k2)y} or {(k1`max`k2)y} or what?
 
 Good question. I'd suppose that they should be parametrized by any (Abelian?) 
 group on the weights/multiplicities, where (+) is the canonical one since 
 we're talking about negative membership.

Any groupoid on the multiplicities would do, I guess.

As I wrote in my answer to Richard, max seems a better choice, as it nicely 
generalises mapping on sets.

Cheers,

  Stefan


Re: [Haskell-cafe] Profiling with QtHaskell

2012-04-20 Thread Stefan Kersten
On 20.04.12 10:07, Øystein Kolsrud wrote:
 Hi! Does anyone know if it is possible to use QtHaskell with profiling turned 
 on?

afair i've used it when profiling an application (not qtHaskell itself). what's
the problem you're running into?

you need to compile the library with profiling support enabled, i have

library-profiling: True

in ~/.cabal/config

sk



Re: [Haskell-cafe] Profiling with QtHaskell

2012-04-20 Thread Øystein Kolsrud
Well, the problem was that I didn't know how to go about compiling it with
profiling support. Thanks for the tip! I'll try that out.

/Øystein

On Fri, Apr 20, 2012 at 12:08 PM, Stefan Kersten s...@k-hornz.de wrote:

 On 20.04.12 10:07, Øystein Kolsrud wrote:
  Hi! Does anyone know if it is possible to use QtHaskell with profiling
 turned on?

 afair i've used it when profiling an application (not qtHaskell itself).
 what's
 the problem you're running into?

 you need to compile the library with profiling support enabled, i have

 library-profiling: True

 in ~/.cabal/config

 sk





-- 
Mvh Øystein Kolsrud


Re: [Haskell-cafe] Profiling with QtHaskell

2012-04-20 Thread Ivan Perez
FYI, you'll also have to compile all the dependencies with profiling enabled.

On 20 April 2012 12:40, Øystein Kolsrud kols...@gmail.com wrote:
 Well, the problem was that I didn't know how to go about compiling it with
 profiling support. Thanks for the tip! I'll try that out.

 /Øystein


 On Fri, Apr 20, 2012 at 12:08 PM, Stefan Kersten s...@k-hornz.de wrote:

 On 20.04.12 10:07, Øystein Kolsrud wrote:
  Hi! Does anyone know if it is possible to use QtHaskell with profiling
  turned on?

 afair i've used it when profiling an application (not qtHaskell itself).
 what's
 the problem you're running into?

 you need to compile the library with profiling support enabled, i have

 library-profiling: True

 in ~/.cabal/config

 sk





 --
 Mvh Øystein Kolsrud





Re: [Haskell-cafe] Fwd: Now Accepting Applications for Mentoring Organizations for GSoC 2012

2012-04-20 Thread Ryan Newton
Did anyone end up being the co-admin?



On Thu, Mar 1, 2012 at 4:50 PM, Johan Tibell johan.tib...@gmail.com wrote:

 On Thu, Mar 1, 2012 at 1:42 PM, Ganesh Sittampalam gan...@earth.li wrote:

 On 01/03/2012 21:37, Johan Tibell wrote:
  On Thu, Mar 1, 2012 at 12:54 PM, Ganesh Sittampalam gan...@earth.li wrote:
 
  FYI, Edward Kmett has volunteered to do it again.
 
 
  That's great since he's the most experienced GSoC admin we have. :)
 
  There's still room for a replacement for me. I had a few people show
  interest so far.

 Maybe I'm confused about the roles, then. Were you co-admins previously,
 or something else?


 Edward and I were co-admins last year. That worked out great in my opinion.

 I want to make sure that we have at least one admin this year, otherwise
 we won't get any GSoC slots this year, hence my original email. I didn't
 know if Edward was interested or not. Now we have a few candidates,
 including Edward. My preference is to have 2+ admins, preferably with
 Edward as one of them as he has experience in the matter.

 Sorry about the confusion.

 Cheers,
   Johan






[Haskell-cafe] why are applicative functors (often) faster than monads? (WAS Google Summer of Code - Lock-free data structures)

2012-04-20 Thread Ben
heinrich and all --

thanks for the illuminating comments, as usual.  i've had a little bit of time 
to play around with this and here's what i've concluded (hopefully i'm not 
mistaken.)

1 - while composability makes STM a great silver bullet, there are other 
composable lower level paradigms.  aaron has identified a few fundamental 
algorithms that appear to be composable, and used a lot in concurrent 
algorithms / data structures : k-CAS, exponential backoff, helping and 
elimination.

2 - k-CAS (and more generally k-RMW) is composable.  i'm not exactly sure if 
i'd call k-CAS monadic but it is at least applicative (i'm not sure what the 
exact definition of k-CAS is.  a bunch of 1-CASs done atomically seems 
applicative; allowing them to interact seems monadic.  certainly k-RMW is 
monadic.)  while it is possible to implement STM on top of k-CAS and vice 
versa, k-CAS can get you closer to the metal, and if you can special case 1-CAS 
to hardware you will win on a lot of benchmarks.  a lot of concurrent 
algorithms only need 1-CAS.
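
to illustrate what i mean by the applicative flavour, here's a rough sketch
(my own made-up types, not from any library) : a k-CAS is just a static list
of independent single-word CASes, and the runner fakes atomicity with one
global lock purely to show the semantics.

    {-# LANGUAGE ExistentialQuantification #-}
    import Control.Concurrent.MVar (MVar, newMVar, takeMVar, putMVar)
    import Data.IORef (IORef, readIORef, writeIORef)

    -- one single-word CAS: a reference, the value we expect to see there,
    -- and the value we want to install.
    data CAS = forall a. Eq a => CAS (IORef a) a a

    -- "applicative" k-CAS: a static collection of independent CASes.
    -- nothing here depends on a value read at run time, which is what
    -- makes the description inspectable (length 1 means plain 1-CAS).
    type KCAS = [CAS]

    -- semantic model only: pretend-atomicity via one global lock.
    -- succeeds iff every expected value still matches.
    runKCAS :: MVar () -> KCAS -> IO Bool
    runKCAS lock ops = do
      takeMVar lock
      ok <- fmap and (mapM check ops)
      if ok then mapM_ commit ops else return ()
      putMVar lock ()
      return ok
      where
        check  (CAS r e _) = fmap (== e) (readIORef r)
        commit (CAS r _ n) = writeIORef r n

    -- hypothetical usage:
    --   lock <- newMVar ()
    --   runKCAS lock [CAS ref1 old1 new1, CAS ref2 old2 new2]

a monadic k-RMW would let a later operation depend on a value read by an
earlier one, which is exactly what destroys this static, inspectable
structure.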

3 - backoff, elimination and helping help scalability a lot, so you want 
support for them.  elimination and helping require communication between 
transactions, whereas STM is all about isolation, so reagents are fundamentally 
different in this regard.

reagents as implemented in the paper are not entirely monadic (by accident, i 
think the author intended them to be.)  as far as i can see he uses an 
applicative form of k-CAS, and the reagents on top of it are applicative : his 
computed combinator (monadic bind) does not allow post-composition (it has no 
continuation.)  there is no reason why it could not be built on top of a 
monadic k-RMW and be fully monadic.  however he is able to recognize and 
special-case 1-CAS, which gives great performance of course.

however, this does bring up a general question : why are applicative functors 
(often) faster than monads?  malcolm wallace mentioned this is true for 
polyparse, and heinrich mentioned this is true more generally.  is there a yoga 
by which one can write monadic functors which have a specialized, faster 
applicative implementation?
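
one standard answer, sketched here with my own toy types (nothing from the
reagents paper or any library) : an applicative computation's shape is known
before any effects run, so an implementation can inspect it and pick a
specialised code path, whereas everything to the right of a monadic bind is
hidden behind a function of a run-time value.

    import Control.Applicative (Applicative (..))

    -- a computation paired with a static summary of the primitive effects
    -- it will perform ('Op' is a placeholder for, say, one CAS).
    data Op = Op deriving Show

    data Static f a = Static [Op] (f a)

    instance Functor f => Functor (Static f) where
      fmap g (Static ops fa) = Static ops (fmap g fa)

    instance Applicative f => Applicative (Static f) where
      pure x                            = Static [] (pure x)
      Static ops1 ff <*> Static ops2 fa = Static (ops1 ++ ops2) (ff <*> fa)

    -- the summary is available before anything runs, so a runner can
    -- dispatch on it, e.g. take a fast path when there is exactly one
    -- primitive operation (the analogue of recognising 1-CAS).
    opCount :: Static f a -> Int
    opCount (Static ops _) = length ops

    -- there is no analogous Monad instance: in (m >>= k) the operations of
    -- (k x) depend on the run-time value x, so they cannot be summed up
    -- statically.  that is the room applicative interfaces leave open for
    -- optimisation.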

right now i'm reading up on k-CAS / k-RMW implementations, and i think i'm 
going to start writing that monad before moving on to elimination / helping.  
i'm finding two things difficult :

- it is hard to represent a unified transaction log because of heterogeneous 
types, and 
- allowing a fully monadic interface makes it hard for me to special case 1-CAS 
(or applicative k-CAS.)

there are workarounds for the first issue (separate logs for each reference, or 
a global clock as in Transactional Locking II.)  for the second, right now i'm 
wondering if i'm going to have to write a compiler for a little DSL; i'd like 
to be able to exploit applicative performance gains generally, and special case 
1-CAS.  
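
for the first problem, the usual trick seems to be hiding the per-reference
type behind an existential, so one log can mix entries at different types.
a rough sketch with my own names (not code from any existing STM or k-CAS
implementation) :

    {-# LANGUAGE ExistentialQuantification #-}
    import Data.IORef (IORef, readIORef, writeIORef)

    -- one log entry for a read-modify-write: the reference, the value we
    -- observed when reading it, and the update to apply on commit.  the
    -- element type is hidden, so entries at different types share one list.
    data LogEntry = forall a. Eq a => LogEntry (IORef a) a (a -> a)

    type TxLog = [LogEntry]

    -- validation: every logged reference must still hold the observed value.
    validate :: TxLog -> IO Bool
    validate = fmap and . mapM check
      where check (LogEntry ref seen _) = fmap (== seen) (readIORef ref)

    -- commit: apply the updates.  a real implementation has to make
    -- validate-plus-commit atomic (e.g. with k-CAS); this sketch does not.
    commit :: TxLog -> IO ()
    commit = mapM_ (\(LogEntry ref seen f) -> writeIORef ref (f seen))

with a representation like this, the 1-CAS special case is at least easy to
recognise: it is just a log of length one.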

best, ben

On Apr 6, 2012, at 5:38 AM, Heinrich Apfelmus wrote:

 Ben wrote:
 perhaps it is too late to suggest things for GSOC --
 but stephen tetley on a different thread pointed at aaron turon's
 work, which there's a very interesting new concurrency framework he
 calls reagents which seems to give the best of all worlds : it is
 declarative and compositional like STM, but gives performance akin to
 hand-coded lock-free data structures.  he seems to have straddled the
 duality of isolation vs message-passing nicely, and can subsume
 things like actors and the join calculus.
 http://www.ccs.neu.edu/home/turon/reagents.pdf
 he has a BSD licensed library in scala at
 https://github.com/aturon/ChemistrySet
 if someone doesn't want to pick this up for GSOC i might have a hand
 at implementing it myself.
 
 That looks great! While I didn't take the time to understand the concurrency 
 model in detail, the overall idea is to use arrows that can be run atomically
 
   runAtomically :: Reagent a b -> (a -> IO b)
 
 This is very similar to STM: combining computations within the monad/arrow is 
 atomic while combining computations outside the monad/arrow can interleave 
 them.
 
   runAtomically (f . g)  -- atomic
   runAtomically f . runAtomically g  -- interleaving
 
 
 Actually, it turns out that the  Reagent  arrow is also a monad, but the 
 author seems to claim that the static arrow style enables certain 
 optimizations. I haven't checked his model in detail to see whether this is 
 really the case and how exactly it differs from STM, but we know that 
 situations like this happen for parser combinators. Maybe it's enough to 
 recast reagents as an applicative functor?
 
 To summarize: the way I understand it is that it's apparently possible to 
 improve the STM monad by turning it into an arrow. (I refer to STM in a 
 very liberal sense here: whether memory is transactional or not is 
 unimportant, the only thing that matters is a computation that composes 
 atomically.)
 
 
 Best regards,
 Heinrich Apfelmus
 
 --
 http://apfelmus.nfshost.com
 
 
 

Re: [Haskell-cafe] [web-devel] http-enumerator: users?

2012-04-20 Thread Michael Litchard
If no one else wants to be responsible for maintaining, I vote for deprecation.

On Thu, Apr 19, 2012 at 2:33 AM, Michael Snoyman mich...@snoyman.com wrote:
 Hi all,

 I'm wondering if there are still active users out there of
 http-enumerator. Four months ago I released http-conduit, and since
 then, my development efforts have gone almost exclusively there. At
 this point, I'm no longer actively using http-enumerator on any of my
 projects, and while I've backported any security fixes to
 http-enumerator, it is otherwise not being updated.

 I see three possibilities for the future of http-enumerator:

 1. Deprecate it in favor of http-conduit
 2. Someone else takes over as maintainer
 3. Turn it into a frontend for http-conduit, exposing an enumerator
 interface. This will likely, though not necessarily, result in API
 breakage

 I'm curious how others feel on this. If I don't hear anything back on
 the topic, I'll assume that no one is interested in http-enumerator,
 and thus will opt for option (1).

 Michael




Re: [Haskell-cafe] TLS support for haskell-xmpp

2012-04-20 Thread John Goerzen

I'll probably give it a go then.

Incidentally, I am not sure if the original author is responsive.  I'm 
fixing some other bugs while I'm at it; would it be kosher for me to 
make a new version upload of it to Hackage in the event that the 
original author doesn't respond?


-- John

On 04/17/2012 04:31 AM, Vincent Hanquez wrote:

On 04/17/2012 04:05 AM, John Goerzen wrote:

Dmitry & others,

Attached is a diff implementing TLS support for haskell-xmpp, and 
correcting a build failure.


The support is rough but it seems to work so far.


It's a bit sad but gnutls is GPL-3 and haskell-xmpp BSD3, rendering 
the combination BSDless.
May I suggest, in a bit of shameless self-advertising, the haskell tls library 
:-)






Re: [Haskell-cafe] why are applicative functors (often) faster than monads? (WAS Google Summer of Code - Lock-free data structures)

2012-04-20 Thread KC
Think of the differences (and similarities) of Applicative Functors and
Monads and the extra context that monads carry around.


-- 
--
Regards,
KC


Re: [Haskell-cafe] why are applicative functors (often) faster than monads? (WAS Google Summer of Code - Lock-free data structures)

2012-04-20 Thread Ben
i'm not sure what your email is pointing at.  if it is unclear, i understand 
the difference between applicative and monadic.  i suppose the easy answer to 
why applicative can be faster than monadic is that you can give a more 
specialized instance declaration.  i was just wondering if there was a way to 
make a monad recognize when it is being used applicatively, but that is 
probably hard in general.

b

On Apr 20, 2012, at 2:54 PM, KC wrote:

 Think of the differences (and similarities) of Applicative Functors and 
 Monads and the extra context that monads carry around.
 
 
 -- 
 --
 Regards,
 KC




Re: [Haskell-cafe] why are applicative functors (often) faster than monads? (WAS Google Summer of Code - Lock-free data structures)

2012-04-20 Thread KC
Sorry, I thought you or someone was asking why Applicative Functors are
faster in general than Monads.

Functional programming is structured function calling to achieve a result
where the functions can be evaluated in an unspecified order; I
thought Applicative Functors had the same unspecified evaluation order,
whereas Monads could carry some sequencing of computations, which has the
extra overhead of continuation passing.

Do I have that correct?


On Fri, Apr 20, 2012 at 4:05 PM, Ben midfi...@gmail.com wrote:

 i'm not sure what your email is pointing at.  if it is unclear, i
 understand the difference between applicative and monadic.  i suppose the
 easy answer to why applicative can be faster than monadic is that you can
 give a more specialized instance declaration.  i was just wondering if
 there was a way to make a monad recognize when it is being used
 applicatively, but that is probably hard in general.

 b

 On Apr 20, 2012, at 2:54 PM, KC wrote:

  Think of the differences (and similarities) of Applicative Functors and
 Monads and the extra context that monads carry around.
 
 
  --
  --
  Regards,
  KC




-- 
--
Regards,
KC


Re: [Haskell-cafe] why are applicative functors (often) faster than monads? (WAS Google Summer of Code - Lock-free data structures)

2012-04-20 Thread Ben
the sequencing matters for applicative functors.  from McBride and Paterson 
[1]:

The idea is that 'pure' embeds pure computations into the pure fragment of an 
effectful world -- the resulting computations may thus be shunted around 
freely, as long as the order of the genuinely effectful computations is 
preserved.

it is interesting to note that sequencing only matters a little for k-CAS : you 
just have to read before you write, but you can do the reads and writes in any 
order (as long as it is ultimately atomic.)

b

[1] McBride C., Paterson R. Applicative programming with effects. Journal of 
Functional Programming 18:1 (2008), pages 1-13.

On Apr 20, 2012, at 4:41 PM, KC wrote:

 Sorry, I thought you or someone was asking why are Applicative Functors 
 faster in general than Monads.
 
 Functional programming is structured function calling to achieve a result 
 where the functions can be evaluated in an unspecified order; I thought 
 Applicative Functors had the same unspecified evaluation order; whereas, 
 Monads could carry some sequencing of computations which has the extra 
 overhead of continuation passing.
 
 Do I have that correct?
 
 
 On Fri, Apr 20, 2012 at 4:05 PM, Ben midfi...@gmail.com wrote:
 i'm not sure what your email is pointing at.  if it is unclear, i understand 
 the difference between applicative and monadic.  i suppose the easy answer to 
 why applicative can be faster than monadic is that you can give a more 
 specialized instance declaration.  i was just wondering if there was a way to 
 make a monad recognize when it is being used applicatively, but that is 
 probably hard in general.
 
 b
 
 On Apr 20, 2012, at 2:54 PM, KC wrote:
 
  Think of the differences (and similarities) of Applicative Functors and 
  Monads and the extra context that monads carry around.
 
 
  --
  --
  Regards,
  KC
 
 
 
 
 -- 
 --
 Regards,
 KC

