Re: [Haskell-cafe] Stackage with GHC 7.8 has started

2013-10-14 Thread Michael Snoyman
On Mon, Oct 14, 2013 at 3:42 PM, Joachim Breitner
m...@joachim-breitner.de wrote:

 Hi,

 Am Sonntag, den 13.10.2013, 17:50 +0200 schrieb Michael Snoyman:

  I wanted to announce that FP Complete is now running a Jenkins job to
  build Stackage with GHC 7.8. You can see the current results in the
  relevant Github issue[1]. Essentially, we're still trying to get
  version bounds updated so that a build can commence.

 Great!

 Is there a way to view the jenkins build results somewhere?

 For some reason I miss a proper homepage of stackage with links to all
 the various resources (but maybe I’m blind).


No, you're not blind, I just haven't gotten things set up in that manner
yet. Specifically for GHC 7.8, there's nothing to display: until a pull
request on HTTP is merged[1], there's nothing to show at all from the
Jenkins builds. Even once that's done, it would be hard to display the
Jenkins results, since I run half the jobs from my local system and the
other half from the FP Complete build server. If anyone has experience
with publishing Jenkins build reports from two different systems and
wouldn't mind helping me out, please be in touch; it would be nice to get
the information available in a more publicly accessible manner.

Michael

[1] https://github.com/haskell/HTTP/pull/47
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Stackage with GHC 7.8 has started

2013-10-13 Thread Michael Snoyman
Hi everyone,

I wanted to announce that FP Complete is now running a Jenkins job to build
Stackage with GHC 7.8. You can see the current results in the relevant
Github issue[1]. Essentially, we're still trying to get version bounds
updated so that a build can commence.

I'd like to ask two things from the community:

* If you have a package with a restrictive upper bound, now's a good time
to start testing that package with GHC 7.8 and relaxing those upper bounds.
It would be great if, when GHC 7.8 is released, a large percentage of
Hackage already compiled with it.
* If you have a package on Hackage that is not yet on Stackage, now's a
great time to add it. We're going to be doing daily builds against three
versions of GHC (7.4.2, 7.6.3, and 7.8), which will help ensure your
packages continue to build consistently.
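To make the first point concrete, relaxing an upper bound is usually a
one-line change to build-depends in the .cabal file. The version numbers
below are only illustrative (GHC 7.8 ships base 4.7):

    build-depends: base >= 4.5 && < 4.7   -- rejects the base shipped with GHC 7.8
    build-depends: base >= 4.5 && < 4.8   -- accepts it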

Michael

[1] https://github.com/fpco/stackage/issues/128
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [ANNOUNCE] Penny - double-entry accounting

2013-10-06 Thread Simon Michael

On 10/2/13 4:55 PM, Omari Norman wrote:

I'm pleased to make the first public announcement of the availability of
Penny, a double-entry command-line accounting system.


Hurrah! Congrats Omari.

Will there be a 1.0 release, or will you be forever chasing that number 
like me?



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Lifting IO actions into Applicatives

2013-10-01 Thread Michael Snoyman
I'm wondering if anyone's run into this problem before, and if there's a
common solution.

In Yesod, we have applicative forms (based originally on formlets). These
forms are instances of Applicative, but not of Monad. Let's consider a
situation where we want to get some user input to fill out a blog post
datatype, which includes the current time:

data Blog = Blog Title UTCTime Contents

myBlogForm :: Form Blog
myBlogForm = Blog <$> titleForm <*> something <*> contentsForm

The question is: what goes in something? Its type has to be:

something :: Form UTCTime

Ideally, I'd call getCurrentTime. The question is: how do I lift that into
a Form? Since Form is only an Applicative, not a Monad, I can't create a
MonadIO instance. However, Form is in fact built on top of IO[1]. And it's
possible to create a MonadTrans instance for Form, since it's entirely
possible to lift actions from the underlying functor/monad into Form. So
something can be written as:

something = lift $ liftIO getCurrentTime

This works, but is unintuitive. One solution would be to have an
ApplicativeIO typeclass and then use liftIOA. My questions here are:

1. Has anyone else run into this issue?
2. Is there an existing solution out there?
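
(For concreteness, here is a minimal sketch of what such an ApplicativeIO
class could look like; only the class name and liftIOA come from the
paragraph above, everything else is illustrative:)

    class Applicative f => ApplicativeIO f where
        liftIOA :: IO a -> f a

    instance ApplicativeIO IO where
        liftIOA = id

    -- with which the form example would read:
    -- myBlogForm = Blog <$> titleForm <*> liftIOA getCurrentTime <*> contentsForm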

Michael

[1] Full crazy definition is at:
http://haddocks.fpcomplete.com/fp/7.4.2/20130922-179/yesod-form/Yesod-Form-Types.html#t:AForm
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Lifting IO actions into Applicatives

2013-10-01 Thread Michael Snoyman
On Tue, Oct 1, 2013 at 12:15 PM, Dan Burton danburton.em...@gmail.com wrote:

 From what you've said, it sounds like you can already write:

 serverSide :: IO a -> Form a

 This seems elegant enough to me for your needs. Just encourage it as an
 idiom specific to Forms.

 myBlogForm = Blog <$> titleForm <*> serverSide getCurrentTime <*>
 contentsForm

 Could you abstract `serverSide` out into a typeclass, such as
 ApplicativeIO? Sure. But why bother? The point is, you've got the
 specialization you need already.



Yes, I agree that to simply solve the problem in yesod-form, this would be
a great solution. But as to why bother with ApplicativeIO: my point in
sending this email was to see if other people have been bothered by this,
and if it's therefore worth coming up with a general purpose solution. If
there's no real interest in it, I don't see a need to create such a general
solution. On the other hand, if people think this is worth a general
ApplicativeIO class, I'd be happy to use that instead of defining an ad-hoc
function in yesod-form.

Thanks to everyone for this great discussion, I'm thoroughly enjoying
following it.

Michael
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Lifting IO actions into Applicatives

2013-10-01 Thread Michael Snoyman
On Tue, Oct 1, 2013 at 10:24 AM, Alexey Uimanov s9gf4...@gmail.com wrote:

 Maybe this is needed new typeclass ApplicativeTrans?



There's actually no problem with defining a MonadTrans instance for
non-monads. Obviously this can't follow the laws directly (since they're
defined in terms of monadic bind and return), but I think we could probably
state Applicative versions of those laws (assuming I haven't made a stupid
mistake):

lift . pure = pure
lift (x <*> y) = lift x <*> lift y
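
(As a sketch, Alexey's ApplicativeTrans could be spelled out like this; the
method name is made up, and the laws above are recorded as comments:)

    class ApplicativeTrans t where
        liftAp :: Applicative f => f a -> t f a

    -- Laws, mirroring the two above:
    --   liftAp . pure    = pure
    --   liftAp (x <*> y) = liftAp x <*> liftAp y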

Michael


 2013/10/1 Michael Snoyman mich...@snoyman.com

 I'm wondering if anyone's run into this problem before, and if there's a
 common solution.

 In Yesod, we have applicative forms (based originally on formlets). These
 forms are instances of Applicative, but not of Monad. Let's consider a
 situation where we want to get some user input to fill out a blog post
 datatype, which includes the current time:

 data Blog = Blog Title UTCTime Contents

 myBlogForm :: Form Blog
 myBlogForm = Blog <$> titleForm <*> something <*> contentsForm

  The question is: what goes in something? Its type has to be:

 something :: Form UTCTime

 Ideally, I'd call getCurrentTime. The question is: how do I lift that
 into a Form? Since Form is only an Applicative, not a Monad, I can't create
 a MonadIO instance. However, Form is in fact built on top of IO[1]. And
 it's possible to create a MonadTrans instance for Form, since it's entirely
 possible to lift actions from the underlying functor/monad into Form. So
 something can be written as:

 something = lift $ liftIO getCurrentTime

 This works, but is unintuitive. One solution would be to have an
 ApplicativeIO typeclass and then use liftIOA. My questions here are:

 1. Has anyone else run into this issue?
 2. Is there an existing solution out there?

 Michael

 [1] Full crazy definition is at:
 http://haddocks.fpcomplete.com/fp/7.4.2/20130922-179/yesod-form/Yesod-Form-Types.html#t:AForm

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Strange exit status behavior from the process package

2013-09-23 Thread Michael Xavier
Could I trouble you or anyone else to help me implement this feature? I
have some test processes, one that exits cleanly on sigterm and one that
refuses and must be killed abruptly. In some experimentation on GHCi,
things seem to go alright, but in test, either process reports that it has
terminated with 15, which is incorrect.

Test code:
https://github.com/MichaelXavier/Angel/blob/sigkill/test/Angel/JobSpec.hs#L38

Relevant implementation code:
https://github.com/MichaelXavier/Angel/blob/sigkill/src/Angel/Job.hs#L161
https://github.com/MichaelXavier/Angel/blob/sigkill/src/Angel/Process.hs#L40

I've spent quite a bit of time trying different solutions to this and have
failed to get the tests to pass.


On Sat, Sep 21, 2013 at 9:24 PM, Brandon Allbery allber...@gmail.com wrote:

 On Sat, Sep 21, 2013 at 11:12 PM, Michael Xavier 
 mich...@michaelxavier.net wrote:

 I've run into some strangeness with the process package. When you kill
 some processes on the command line you correctly get a non-zero exit
 status. However when using the process package's terminateProcess (which
 sends a SIGTERM), it returns an ExitSuccess:


 The 143 you get from the shell is synthetic (and nonportable). Signals are
 not normal exit codes; WEXITSTATUS is not defined in this case (but often
 will be 0, as seems to be shown here), instead WTERMSIG will be set to the
 signal that terminated the process. The caller should be using WIFEXITED /
 WIFSIGNALED / WIFSTOPPED to determine the cause of the termination and then
 the appropriate WEXITSTATUS / WTERMSIG / WSTOPSIG call to determine the
 value.

 It sounds like the createProcess API does not recognize signal exit at
 all, and uses WEXITSTATUS even when it is not valid.
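
 (For reference, a hedged sketch of how that distinction surfaces through the
 unix package's System.Posix.Process API; it only illustrates WIFEXITED vs
 WIFSIGNALED and is not what createProcess does internally:)

    import System.Posix.Process (ProcessStatus (..), executeFile, forkProcess,
                                 getProcessStatus)
    import System.Posix.Signals (sigTERM, signalProcess)

    main :: IO ()
    main = do
        pid <- forkProcess $ executeFile "sleep" True ["100"] Nothing
        signalProcess sigTERM pid
        -- getProcessStatus reports Exited (WIFEXITED) separately from
        -- death-by-signal (WIFSIGNALED), the distinction the shell folds
        -- into its synthetic 128+N exit code.
        mst <- getProcessStatus True False pid
        case mst of
            Just (Exited code) -> putStrLn $ "exited normally: " ++ show code
            Just other         -> putStrLn $ "killed or stopped by a signal: " ++ show other
            Nothing            -> putStrLn "no status change reported"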

 --
 brandon s allbery kf8nh   sine nomine
 associates
 allber...@gmail.com
 ballb...@sinenomine.net
 unix, openafs, kerberos, infrastructure, xmonad
 http://sinenomine.net




-- 
Michael Xavier
http://www.michaelxavier.net
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Strange exit status behavior from the process package

2013-09-21 Thread Michael Xavier
I've run into some strangeness with the process package. When you kill some
processes on the command line you correctly get a non-zero exit status.
However when using the process package's terminateProcess (which sends a
SIGTERM), it returns an ExitSuccess:

module Main (main) where

import Control.Concurrent (threadDelay)
import System.Process (createProcess, proc, getProcessExitCode,
terminateProcess)

main :: IO ()
main = do
  (_, _, _, ph) <- createProcess $ proc "/usr/bin/sleep" ["100"]
  terminateProcess ph
  threadDelay 100
  print =<< getProcessExitCode ph

-- prints Just ExitSuccess, should be Just (ExitFailure 143)

term1: sleep 100
term2: pkill sleep
term1: echo $? # 143

Anyone know what might be going on?
-- 
Michael Xavier
http://www.michaelxavier.net
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN Vigilance: Get notified when periodical tasks fail to run successfully

2013-09-18 Thread Michael Xavier
Hey Cafe,

Just wanted to announce that a project I've been tinkering with for a while
finally got to a state where I felt comfortable releasing it. Vigilance is
a Dead Man's Switch system that notifies you when periodical tasks
fail to check in when you expected them to.

An example of this could be registering the daily backups you do of your
servers and having Vigilance send emails or HTTP POST requests if the backups
ever fail to check in. Vigilance provides an executable for doing check-ins
and inspecting your watches as well as a simple REST API if you need
something embeddable for existing projects.

HackageDB: http://hackage.haskell.org/package/vigilance
Github: http://github.com/michaelxavier/vigilance
Introductory blog post:
http://michaelxavier.net/posts/2013-09-17-Announcing-Vigilance-An-Extensible-Dead-Man-s-Switch-System.html


-- 
Michael Xavier
http://www.michaelxavier.net
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monomorphic containers, Functor/Foldable/Traversable WAS: mapM_ for bytestring

2013-09-16 Thread Michael Snoyman
On Mon, Sep 16, 2013 at 10:34 AM, John Lato jwl...@gmail.com wrote:

 On Fri, Sep 13, 2013 at 12:48 AM, Michael Snoyman mich...@snoyman.com wrote:




 On Thu, Sep 12, 2013 at 2:37 AM, John Lato jwl...@gmail.com wrote:

 I didn't see this message and replied privately to Michael earlier, so
 I'm replicating my comments here.


 Sorry about that, I wrote to you privately first and then thought this
 might be a good discussion for the cafe.


 1.  Sooner or later I expect you'll want something like this:

 class LooseMap c el el' where

   lMap :: (el -> el') -> c el -> c el'





  It covers the case of things like hashmaps/unboxed vectors that have
 class constraints on elements.  Although maybe LooseFunctor or LFunctor is
 a better name.

 Probably something similar for Traversable would be good also, as would
 a default instance in terms of Functor.


 That's interesting. It's quite similar to the CanMap[1] class in
 classy-prelude or Each from lens, except it can drop a type parameter and
 the fundeps by requiring the container to be polymorphic. If we're willing
 to use more exotic extensions, ConstraintKinds could be useful as well:

 class ConstrainedMap t where
     type MapConstraint t e :: Constraint
     cMap :: (MapConstraint t e1, MapConstraint t e2) => (e1 -> e2) -> t e1 -> t e2
 instance ConstrainedMap Set.Set where
     type MapConstraint Set.Set e = Ord e
     cMap = Set.map

 One reason I'd definitely not want to call this anything with the name
 Functor in it is because Set.map can violate the Functor laws, in
 particular:

 Set.map (f . g) /= Set.map f . Set.map g

 I believe the only law that could be applied to Set.map would be:

 Set.map f = Set.fromList . List.map f . Set.toList

 I would presume this would generalize to any other possible instance.


 Would it make more sense to just say that all instances must obey the
 Functor laws, thereby not allowing the Set instance?  That might make it
 easier to reason about using the class.  Although I've never needed that
 when I've used it in the past, so I guess whichever you think is more
 useful is fine by me.



I think I just made a bad assumption about what you were proposing. If I
was going to introduce a typeclass like this, I'd want it to support `Set`,
since IME it's the most commonly used polymorphic `map` operation that has
constraints. (Note that HashMap and Map are in fact Functors, since mapping
only affects their values, which are unconstrained.) I don't really have
any strong feelings on this topic, just that it would be nice to have
*some* kind
of a map-like function that worked on Set and HashSet.



 One final idea would be to take your LooseMap and apply the same kind of
 monomorphic conversion the rest of the library uses:

 class MonoLooseMap c1 c2 | c1 -> c2, c2 -> c1 where
     mlMap :: (Element c1 -> Element c2) -> c1 -> c2
 instance (Ord e1, Ord e2) => MonoLooseMap (Set.Set e1) (Set.Set e2) where
     mlMap = Set.map

 Of all of them, ConstrainedMap seems like it would be the most
 user-friendly, as error messages would just have a single type parameter.
 But I don't have any strong leanings.


 I agree that ConstrainedMap would likely be the most user-friendly.  It
 also seems to best express the actual relationship between the various
 components, so it would be my preferred choice.


 [1]
 http://haddocks.fpcomplete.com/fp/7.4.2/20130829-168/classy-prelude/ClassyPrelude-Classes.html#t:CanMap


 2.  IMHO cMapM_ (and related) should be part of the Foldable class.
 This is entirely for performance reasons, but there's no downside since you
 can just provide a default instance.


 Makes sense to me, done. By the way, this can't be done for sum/product,
 because those require a constraint on the Element.


 3.  I'm not entirely sure that the length* functions belong here.  I
 understand why, and I think it's sensible reasoning, and I don't have a
 good argument against it, but I just don't like it.  With those, and
 mapM_-like functions, it seems that the foldable class is halfway to being
 another monolithic ListLike.  But I don't have any better ideas either.


 I agree here, but like you said in (2), it's a performance concern. The
 distinction I'd make from ListLike is that you only have to define
 foldr/foldl to get a valid instance (and even that could be dropped to just
 foldr, except for conflicts with the default signatures extension).




 As to the bikeshed color, I would prefer to just call the classes
 Foldable/Traversable.  People can use qualified imports to disambiguate
 when writing instances, and at call sites client code would never need
 Data.{Foldable|Traversable} and can just use these versions instead.  I'd
 still want a separate name for Functor though, since it's in the Prelude,
 so maybe it's better to be consistent.  My $.02.


 I prefer avoiding the name conflict, for a few reasons:

- In something like ClassyPrelude, we can export both typeclasses
without a proper if they have separate names.

Re: [Haskell-cafe] Monomorphic containers, Functor/Foldable/Traversable WAS: mapM_ for bytestring

2013-09-16 Thread Michael Snoyman
On Tue, Sep 17, 2013 at 4:25 AM, John Lato jwl...@gmail.com wrote:

 On Mon, Sep 16, 2013 at 4:57 AM, Michael Snoyman mich...@snoyman.com wrote:


 I think I just made a bad assumption about what you were proposing. If I
 was going to introduce a typeclass like this, I'd want it to support `Set`,
 since IME it's the most commonly used polymorphic `map` operation that has
 constraints. (Note that HashMap and Map are in fact Functors, since mapping
 only affects their values, which are unconstrained.) I don't really have
 any strong feelings on this topic, just that it would be nice to have *
 some* kind of a map-like function that worked on Set and HashSet.


 Ok, understood.  I most often use this with Data.Vector.Unboxed and
 Data.Vector.Storable, and that it would be useful for Set didn't really
 occur to me.

 Given that, I agree that a non-Functor name is a workable choice.



OK, I've added both LooseMap, and storable vector instances:

https://github.com/snoyberg/mono-traversable/commit/3f1c78eb12433a1e65d53b51a7fe1eb69ff80eec

Does that look reasonable?

Michael
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Quick Angel User's Survey

2013-09-14 Thread Michael Xavier
Hey Cafe,

I am the maintainer of Angel, the process monitoring daemon. Angel's job is
to start a configured set of processes and restart them when they go away.
I was responding to a ticket and realized that the correct functionality is
not obvious in one case, so I figured I'd ask the stakeholders: people who
use Angel. From what I know, most people who use Angel are Haskellers so
this seemed like the place.

When Angel is terminated, it tries to cleanly shut down any processes it is
monitoring. It also shuts down processes that it spawned when they are
removed from the config and the config is reloaded via the HUP signal. It
uses terminateProcess from System.Process which sends a SIGTERM to the
program on *nix systems.

The trouble is that SIGTERM can be intercepted and a process can still fail
to shut down. Currently Angel issues the SIGTERM and hopes for the best. It
also cleans pidfiles if there were any, which may send a misleading
message. There are a couple of routes I could take:

1. Leave it how it is. Leave it to the user to make sure stubborn processes
go away. I don't like this solution so much as it makes Angel harder to
reason about from a user's perspective.
2. Send a TERM signal then wait for a certain number of seconds, then send
an uninterruptable signal like SIGKILL.

There are some caveats with #2. I think I'd prefer the timeout to be
configurable per-process. I think I'd also prefer that if no timeout is
specified, we assume the user does not want us to use a SIGKILL. SIGKILL
can be very dangerous for some processes like databases. I want explicit
user permission to do something like this. If Angel generated a pidfile for
the process, it should only be cleaned if Angel can confirm the process
is dead. Otherwise it should be left so the user can handle it.
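
(A minimal sketch of option #2 working on raw PIDs rather than Angel's own
types; the function and parameter names below are made up for illustration:)

    import Control.Concurrent (threadDelay)
    import Control.Exception (IOException, try)
    import Control.Monad (when)
    import System.Posix.Signals (nullSignal, sigKILL, sigTERM, signalProcess)
    import System.Posix.Types (ProcessID)

    -- Send SIGTERM, and only if a grace period was configured, follow up with
    -- SIGKILL when the process is still alive after that many seconds.
    softKill :: Maybe Int -> ProcessID -> IO ()
    softKill mGrace pid = do
        signalProcess sigTERM pid
        case mGrace of
            Nothing    -> return ()          -- no timeout configured: never escalate
            Just grace -> do
                threadDelay (grace * 1000000)
                alive <- isAlive pid
                when alive $ signalProcess sigKILL pid
      where
        -- The traditional "signal 0" liveness probe: delivers nothing, but
        -- fails if the process no longer exists.
        isAlive p = do
            r <- try (signalProcess nullSignal p) :: IO (Either IOException ())
            return (either (const False) (const True) r)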

So the real question: is the extra burden of an optional configuration flag
per process worth this feature? Are my assumptions about path #2 reasonable?

Thanks for your feedback!

-- 
Michael Xavier
http://www.michaelxavier.net
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proposal: New syntax for Haskell

2013-09-14 Thread Michael Xavier
I just want to chime in to defend Cucumber, which I use in Ruby at my day
job. I see a lot of people put up the strawman that it can only be used as
a way for business people to write acceptance tests. That idea is
questionable and I've never worked at a company big enough to require that,
or with business people who have ever wanted to write my tests for me.

In Ruby, I use Cucumber purely for myself to drive high level acceptance
tests for products. I think the sweet spot for it is when you're starting
work on a high level feature and you have NO idea how it will be
implemented or even how it will work in detail. I find that writing in the
limited language that Gherkin provides keeps my brain from going right to
implementation details. I write out tests that explore how the system
should work. I write them in the perspective of the user (which you should
be doing in your head regardless because the user is the one who will
actually interact with your program). I then read them back and make sure
they make logical sense. Only then do I start hooking up the steps I wrote
to code that drives integration/acceptance tests, via a browser for
instance. At the end I have a failing cucumber test that describes the
system in an intuitive manner with zero line noise (programming language
syntax). I am now free to think about implementation details, write lower
level unit tests and implement things that can be described in much less
verbose fashion. I really like that process and if I ever had a job to
develop products in Haskell, I'd probably take a similar approach.

Do note that I advocate using Cucumber to create/drive user stories, not to
unit test low level functions like folds. If you don't have a customer of a
particular function who could describe how they interact with it in
layman's term, then Cucumber is the wrong tool. Use quickcheck/hunit/hspec
for that.


On Thu, Sep 12, 2013 at 3:42 PM, Bob Ippolito b...@redivi.com wrote:

 Have you tried AppleScript? I wouldn't say it's pleasant to use, but it's
 easy to read.


 On Thursday, September 12, 2013, David Thomas wrote:

 I've long been interested in a scripting language designed to be spoken.
 Not interested enough to go about making it happen... but the idea is
 fascinating and possibly useful.


 On Thu, Sep 12, 2013 at 2:57 PM, Andreas Abel andreas.a...@ifi.lmu.de wrote:


 +1

 Cucumber seems to be great if you mainly want to read your code over the
 telephone, distribute it via national radio broadcast, or dictate it to
 your secretary or your voice recognition software.  You can program thus
 without having to use your fingers.  You can lie on your back on your sofa,
 close your eyes, and utter your programs...

 We could have blind Haskell/Cucumber programming contests...

 Tons of new possiblilities...

 Strongly support this proposal. ;-)

 Andreas

 On 2013-09-10 22:57, Artyom Kazak wrote:

 On Wed, 11 Sep 2013 00:20:26 +0400, Thiago Negri evoh...@gmail.com wrote:

 I hope these jokes do not cause people to be afraid to post new ideas.

 Agreed. I would also like to clarify that my message was much more a joke
 on
 the incomprehensibility of legal acts than on the original proposal.

 By the way, I am pretty impressed with this piece of Cucumber
 description/code:

    Scenario: Mislav creates a valid task with an upload
      When I go to the "Awesome Ruby Yahh" task list page of the "Ruby Rockstars" project
      When I follow "+ Add Task"
      And I fill in "Task title" with "Ohhh upload"
      And I follow "Attachment"
      When I attach the file "features/support/sample_files/dragon.jpg" to "upload_file"
      And I press "Add Task"
      And I wait for 1 second
      And I should see "Ohhh upload" as a task name

 I was much more sceptical when I had only seen the example in Niklas’s
 message.
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 --
 Andreas Abel   Du bist der geliebte Mensch.

 Theoretical Computer Science, University of Munich 
 http://www.tcs.informatik.uni-muenchen.de/~abel/


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Michael Xavier
http://www.michaelxavier.net
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monomorphic containers, Functor/Foldable/Traversable WAS: mapM_ for bytestring

2013-09-13 Thread Michael Snoyman
On Fri, Sep 13, 2013 at 9:18 AM, Mario Blažević blama...@acanac.net wrote:

 On 09/13/13 01:51, Michael Snoyman wrote:

 On Fri, Sep 13, 2013 at 5:38 AM, Mario Blažević blama...@acanac.net wrote:

 On 09/11/13 19:37, John Lato wrote:


 3.  I'm not entirely sure that the length* functions belong here.  I
 understand why, and I think it's sensible reasoning, and I don't have a
 good argument against it, but I just don't like it.  With those, and
 mapM_-like functions, it seems that the foldable class is halfway to
 being another monolithic ListLike.  But I don't have any better ideas
 either.


 If monolithic classes bother you, my monoid-subclasses
 package manages to break down the functionality into several
 classes. One big difference is that everything is based off Monoid
 rather than Foldable, and that has some big effects on the interface.



 I'd point out what I'd consider a bigger difference: the type signatures
 have changed in a significant way. With MonoFoldable, folding on a
 ByteString would be:

 (Word8 -> b -> b) -> b -> ByteString -> b

 With monoid-subclasses, you get:

 (ByteString -> b -> b) -> b -> ByteString -> b

 There's certainly a performance issue to discuss, but I'm more worried
 about semantics. Word8 tells me something very specific: I have one, and
 precisely one, octet. ByteString tells me I have anywhere from 0 to 2^32 or
 2^64  octets. Yes, we know from context that it will always be of size one,
 but the type system can't enforce that invariant.


 All true, but we can also use this generalization to our advantage.
 For example, the same monoid-subclasses package provides ByteStringUTF8, a
 newtype wrapper around ByteString. It behaves the same as the plain
 ByteString except its atomic factors are not of size 1, instead it folds on
 UTF-8 encoded character boundaries. You can't represent that in Haskell's
 type system.



I can think of two different ways of achieving this approach with
MonoFoldable instead: by setting `Element` to either `Char` or
`ByteStringUTF8`. The two approaches would look like:

newtype ByteStringUTF8A = ByteStringUTF8A S.ByteString
type instance Element ByteStringUTF8A = Char
instance MonoFoldable ByteStringUTF8A where
    ofoldr f b (ByteStringUTF8A bs) = ofoldr f b (decodeUtf8 bs)
    ofoldl' = undefined

newtype ByteStringUTF8B = ByteStringUTF8B S.ByteString
type instance Element ByteStringUTF8B = ByteStringUTF8B
instance MonoFoldable ByteStringUTF8B where
    ofoldr f b (ByteStringUTF8B bs) =
        ofoldr (f . ByteStringUTF8B . encodeUtf8 . T.singleton) b (decodeUtf8 bs)
    ofoldl' = undefined

I'd personally prefer the first approach, as that gives the right
guarantees at the type level: each time the function is called, it will be
provided with precisely one character. I believe the second approach
provides the same behavior as monoid-subclasses does right now.
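
(For readers following along, a minimal sketch of the interface those
instances assume; the real classes in mono-traversable have more methods and
defaults than shown here:)

    {-# LANGUAGE TypeFamilies #-}

    type family Element c

    class MonoFoldable c where
        ofoldr  :: (Element c -> b -> b) -> b -> c -> b
        ofoldl' :: (a -> Element c -> a) -> a -> c -> a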

Michael
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monomorphic containers, Functor/Foldable/Traversable WAS: mapM_ for bytestring

2013-09-13 Thread Michael Snoyman
On Fri, Sep 13, 2013 at 10:07 AM, Mario Blažević blama...@acanac.net wrote:

 On 09/13/13 02:28, Michael Snoyman wrote:




 On Fri, Sep 13, 2013 at 9:18 AM, Mario Blažević blama...@acanac.net wrote:

 On 09/13/13 01:51, Michael Snoyman wrote:

  On Fri, Sep 13, 2013 at 5:38 AM, Mario Blažević blama...@acanac.net wrote:

 On 09/11/13 19:37, John Lato wrote:


  3.  I'm not entirely sure that the length* functions belong here.  I
  understand why, and I think it's sensible reasoning, and I don't have a
  good argument against it, but I just don't like it.  With those, and
  mapM_-like functions, it seems that the foldable class is halfway to
  being another monolithic ListLike.  But I don't have any better ideas
  either.


  If monolithic classes bother you, my monoid-subclasses package
  manages to break down the functionality into several classes. One big
  difference is that everything is based off Monoid rather than
  Foldable, and that has some big effects on the interface.



 I'd point out what I'd consider a bigger difference: the type
 signatures have changed in a significant way. With
 MonoFoldable, folding on a ByteString would be:

  (Word8 -> b -> b) -> b -> ByteString -> b

 With monoid-subclasses, you get:

  (ByteString -> b -> b) -> b -> ByteString -> b

 There's certainly a performance issue to discuss, but I'm more
 worried about semantics. Word8 tells me something very
 specific: I have one, and precisely one, octet. ByteString
 tells me I have anywhere from 0 to 2^32 or 2^64  octets. Yes,
 we know from context that it will always be of size one, but
 the type system can't enforce that invariant.


 All true, but we can also use this generalization to our
 advantage. For example, the same monoid-subclasses package
 provides ByteStringUTF8, a newtype wrapper around ByteString. It
 behaves the same as the plain ByteString except its atomic factors
 are not of size 1, instead it folds on UTF-8 encoded character
 boundaries. You can't represent that in Haskell's type system.



 I can think of two different ways of achieving this approach with
 MonoFoldable instead: by setting `Element` to either `Char` or
 `ByteStringUTF8`. The two approaches would look like:

  newtype ByteStringUTF8A = ByteStringUTF8A S.ByteString
  type instance Element ByteStringUTF8A = Char
  instance MonoFoldable ByteStringUTF8A where
      ofoldr f b (ByteStringUTF8A bs) = ofoldr f b (decodeUtf8 bs)
      ofoldl' = undefined

  newtype ByteStringUTF8B = ByteStringUTF8B S.ByteString
  type instance Element ByteStringUTF8B = ByteStringUTF8B
  instance MonoFoldable ByteStringUTF8B where
      ofoldr f b (ByteStringUTF8B bs) =
          ofoldr (f . ByteStringUTF8B . encodeUtf8 . T.singleton) b (decodeUtf8 bs)
      ofoldl' = undefined

 I'd personally prefer the first approach, as that gives the right
 guarantees at the type level: each time the function is called, it will be
 provided with precisely one character. I believe the second approach
 provides the same behavior as monoid-subclasses does right now.


 Right now monoid-subclasses actually provides both approaches. You're
 correct that it provides the second one as instance FactorialMonoid
 ByteStringUTF8, but it also provides the former as instance TextualMonoid
 ByteStringUTF8. The TextualMonoid class is basically what you'd get if you
 restricted MonoFoldable to type Elem=Char. I wanted to keep the package
 extension-free, you see.


Got it, that makes sense.


 My main point is that it's worth considering basing MonoFoldable on
 FactorialMonoid, because it can be considered its specialization. Methods
 like length, take, or reverse, which never mention the item type in their
 signature, can be inherited from the FactorialMonoid superclass with no
 change whatsoever. Other methods would differ in their signatures (and
 performance), but the semantics would carry over.



My immediate concern is that this would enforce a number of restrictions on
what could be a MonoFoldable. For example, you couldn't have an instance
for `Identity a`. Being able to fold over any arbitrary container, even if
it's not a Monoid, can be very useful.

Michael
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monomorphic containers, Functor/Foldable/Traversable WAS: mapM_ for bytestring

2013-09-12 Thread Michael Snoyman
On Thu, Sep 12, 2013 at 2:37 AM, John Lato jwl...@gmail.com wrote:

 I didn't see this message and replied privately to Michael earlier, so I'm
 replicating my comments here.


Sorry about that, I wrote to you privately first and then thought this
might be a good discussion for the cafe.


 1.  Sooner or later I expect you'll want something like this:

 class LooseMap c el el' where

   lMap :: (el -> el') -> c el -> c el'


  It covers the case of things like hashmaps/unboxed vectors that have
 class constraints on elements.  Although maybe LooseFunctor or LFunctor is
 a better name.

 Probably something similar for Traversable would be good also, as would a
 default instance in terms of Functor.


That's interesting. It's quite similar to the CanMap[1] class in
classy-prelude or Each from lens, except it can drop a type parameter and
the fundeps by requiring the container to be polymorphic. If we're willing
to use more exotic extensions, ConstraintKinds could be useful as well:

class ConstrainedMap t where
    type MapConstraint t e :: Constraint
    cMap :: (MapConstraint t e1, MapConstraint t e2) => (e1 -> e2) -> t e1 -> t e2
instance ConstrainedMap Set.Set where
    type MapConstraint Set.Set e = Ord e
    cMap = Set.map

One reason I'd definitely not want to call this anything with the name
Functor in it is because Set.map can violate the Functor laws, in
particular:

Set.map (f . g) /= Set.map f . Set.map g

I believe the only law that could be applied to Set.map would be:

Set.map f = Set.fromList . List.map f . Set.toList

I would presume this would generalize to any other possible instance.
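
(A QuickCheck rendering of that law, as a sketch; the property name and the
particular non-injective function are made up for illustration:)

    import qualified Data.Set as Set
    import Test.QuickCheck (quickCheck)

    prop_setMapViaList :: [Int] -> Bool
    prop_setMapViaList xs = Set.map f s == Set.fromList (map f (Set.toList s))
      where
        s = Set.fromList xs
        f = (`div` 2)  -- deliberately non-injective, so elements can collapse

    main :: IO ()
    main = quickCheck prop_setMapViaList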

One final idea would be to take your LooseMap and apply the same kind of
monomorphic conversion the rest of the library uses:

class MonoLooseMap c1 c2 | c1 -> c2, c2 -> c1 where
    mlMap :: (Element c1 -> Element c2) -> c1 -> c2
instance (Ord e1, Ord e2) => MonoLooseMap (Set.Set e1) (Set.Set e2) where
    mlMap = Set.map

Of all of them, ConstrainedMap seems like it would be the most
user-friendly, as error messages would just have a single type parameter.
But I don't have any strong leanings.

[1]
http://haddocks.fpcomplete.com/fp/7.4.2/20130829-168/classy-prelude/ClassyPrelude-Classes.html#t:CanMap


 2.  IMHO cMapM_ (and related) should be part of the Foldable class.  This
 is entirely for performance reasons, but there's no downside since you can
 just provide a default instance.


Makes sense to me, done. By the way, this can't be done for sum/product,
because those require a constraint on the Element.
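
(A sketch of what that change amounts to; the names are illustrative rather
than the package's exact API:)

    {-# LANGUAGE TypeFamilies #-}

    type family Element c

    class MonoFoldable c where
        ofoldr :: (Element c -> b -> b) -> b -> c -> b

        -- In the class so that ByteString, vectors, etc. can override it with
        -- a tight specialised loop; the default just reuses ofoldr.
        omapM_ :: Monad m => (Element c -> m ()) -> c -> m ()
        omapM_ f = ofoldr (\x rest -> f x >> rest) (return ())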


 3.  I'm not entirely sure that the length* functions belong here.  I
 understand why, and I think it's sensible reasoning, and I don't have a
 good argument against it, but I just don't like it.  With those, and
 mapM_-like functions, it seems that the foldable class is halfway to being
 another monolithic ListLike.  But I don't have any better ideas either.


I agree here, but like you said in (2), it's a performance concern. The
distinction I'd make from ListLike is that you only have to define
foldr/foldl to get a valid instance (and even that could be dropped to just
foldr, except for conflicts with the default signatures extension).


 As to the bikeshed color, I would prefer to just call the classes
 Foldable/Traversable.  People can use qualified imports to disambiguate
 when writing instances, and at call sites client code would never need
 Data.{Foldable|Traversable} and can just use these versions instead.  I'd
 still want a separate name for Functor though, since it's in the Prelude,
 so maybe it's better to be consistent.  My $.02.


I prefer avoiding the name conflict, for a few reasons:

   - In something like ClassyPrelude, we can export both typeclasses
   without a proper if they have separate names.
   - Error messages and documentation will be clearer. Consider how the
   type signature `ByteString -> foo` doesn't let you know whether it's a
   strict or lazy bytestring.
   - I got specific feedback from Edward that it would be easier to include
   instances for these classes if the names didn't clash with standard
   terminology.
   - It leaves the door open for including this concept upstream in the
   future, even if that's not the goal for now.




 On Wed, Sep 11, 2013 at 3:25 PM, Michael Snoyman mich...@snoyman.com wrote:

 That's really funny timing. I started work on a very similar project just
 this week:

  https://github.com/snoyberg/mono-traversable

 It's not refined yet, which is why I haven't discussed it too publicly,
 but it's probably at the point where some review would make sense. There's
 been a bit of a discussion on a separate Github issue[1] about it.

 A few caveats:

- The names are completely up for debate, many of them could be
improved.
- The laws aren't documented yet, but they mirror the laws for the
polymorphic classes these classes are based on.
- The Data.MonoTraversable module is the main module to look at. The
 other two are far more nascent (though I'd definitely appreciate feedback
 people have on them).

Re: [Haskell-cafe] Monomorphic containers, Functor/Foldable/Traversable WAS: mapM_ for bytestring

2013-09-12 Thread Michael Snoyman
On Fri, Sep 13, 2013 at 5:38 AM, Mario Blažević blama...@acanac.net wrote:

 On 09/11/13 19:37, John Lato wrote:

 I didn't see this message and replied privately to Michael earlier, so
 I'm replicating my comments here.

 1.  Sooner or later I expect you'll want something like this:

 class LooseMap c el el' where


 lMap :: (el -> el') -> c el -> c el'

 It covers the case of things like hashmaps/unboxed vectors that have
 class constraints on elements.  Although maybe LooseFunctor or LFunctor
 is a better name.

 Probably something similar for Traversable would be good also, as would
 a default instance in terms of Functor.

 2.  IMHO cMapM_ (and related) should be part of the Foldable class.
 This is entirely for performance reasons, but there's no downside since
 you can just provide a default instance.

 3.  I'm not entirely sure that the length* functions belong here.  I
 understand why, and I think it's sensible reasoning, and I don't have a
 good argument against it, but I just don't like it.  With those, and
 mapM_-like functions, it seems that the foldable class is halfway to
 being another monolithic ListLike.  But I don't have any better ideas
 either.


 If monolithic classes bother you, my monoid-subclasses package
 manages to break down the functionality into several classes. One big
 difference is that everything is based off Monoid rather than Foldable, and
 that has some big effects on the interface.



I'd point out what I'd consider a bigger difference: the type signatures
have changed in a significant way. With MonoFoldable, folding on a
ByteString would be:

(Word8 -> b -> b) -> b -> ByteString -> b

With monoid-subclasses, you get:

(ByteString -> b -> b) -> b -> ByteString -> b

There's certainly a performance issue to discuss, but I'm more worried
about semantics. Word8 tells me something very specific: I have one, and
precisely one, octet. ByteString tells me I have anywhere from 0 to 2^32 or
2^64  octets. Yes, we know from context that it will always be of size one,
but the type system can't enforce that invariant.

Michael
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Monomorphic containers, Functor/Foldable/Traversable WAS: mapM_ for bytestring

2013-09-11 Thread Michael Snoyman
That's really funny timing. I started work on a very similar project just
this week:

https://github.com/snoyberg/mono-traversable

It's not refined yet, which is why I haven't discussed it too publicly, but
it's probably at the point where some review would make sense. There's been
a bit of a discussion on a separate Github issue[1] about it.

A few caveats:

   - The names are completely up for debate, many of them could be improved.
   - The laws aren't documented yet, but they mirror the laws for the
   polymorphic classes these classes are based on.
   - The Data.MonoTraversable module is the main module to look at. The
   other two are far more nascent (though I'd definitely appreciate feedback
   people have on them).

I think this and mono-foldable have a lot of overlap, I'd be interested to
hear what you think in particular John.

Michael

[1] https://github.com/snoyberg/classy-prelude/issues/18


On Wed, Sep 11, 2013 at 11:05 PM, John Lato jwl...@gmail.com wrote:

 I agree with everything Edward has said already.  I went through a similar
 chain of reasoning a few years ago when I started using ListLike, which
 provides a FoldableLL class (although it uses fundeps as ListLike predates
 type families).  ByteString can't be a Foldable instance, nor do I think
 most people would want it to be.

 Even though I would also like to see mapM_ in bytestring, it's probably
 faster to have a library with a separate monomorphic Foldable class.  So I
 just wrote one:

 https://github.com/JohnLato/mono-foldable
 http://hackage.haskell.org/package/mono-foldable

 Petr Pudlak has done some work in this area.  A big problem is that
 foldM/mapM_ are typically implemented in terms of Foldable.foldr (or
 FoldableLL), but this isn't always optimal for performance.  They really
 need to be part of the type class so that different container types can
 have specialized implementations.  I did that in mono-foldable, using
 Artyom's map implementation (Artyom, please let me know if you object to
 this!)

 pull requests, forks, etc all welcome.

 John L.


 On Wed, Sep 11, 2013 at 1:29 PM, Edward Kmett ekm...@gmail.com wrote:

 mapM_ is actually implemented in terms of Foldable, not Traversable, and
 its implementation in terms of folding a ByteString is actually rather slow
 in my experience doing so inside lens and isn't much faster than the naive
 version that was suggested at the start of this discussion.

 But as we're not monomorphizing Foldable/Traversable, this isn't a thing
 that is able to happen anyways.

 -Edward


 On Wed, Sep 11, 2013 at 2:25 PM, Henning Thielemann 
 lemm...@henning-thielemann.de wrote:


 On Wed, 11 Sep 2013, Duncan Coutts wrote:

  For mapM etc, personally I think a better solution would be if
 ByteString and Text and other specialised containers could be an
 instance of Foldable/Traversable. Those classes define mapM etc but
 currently they only work for containers that are polymorphic in their
 elements, so all specialised containers are excluded. I'm sure there
 must be a solution to that (I'd guess with type families) and that would
 be much nicer than adding mapM etc to bytestring itself. We would then
 just provide efficient instances for Foldable/Traversable.


 I'd prefer to keep bytestring simple with respect to the number of type
 extensions. Since you must implement ByteString.mapM anyway, you can plug
 this into an instance definition of Traversable ByteString.



 ___
 Libraries mailing list
 librar...@haskell.org
 http://www.haskell.org/mailman/listinfo/libraries



 ___
 Libraries mailing list
 librar...@haskell.org
 http://www.haskell.org/mailman/listinfo/libraries


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proposal: New syntax for Haskell

2013-09-10 Thread Michael Snoyman
I'll admit, I also thought it was a joke.


On Tue, Sep 10, 2013 at 2:34 PM, Ian Ross i...@skybluetrades.net wrote:

 Me too, but I wasn't brave enough to say so after people seemed to be
 taking it seriously...


 On 10 September 2013 13:33, Roman Cheplyaka r...@ro-che.info wrote:

 * John Wiegley jo...@fpcomplete.com [2013-09-10 04:48:36-0500]
   Niklas Hambüchen m...@nh2.me writes:
 
   Code written in cucumber syntax is concise and easy to read
 
  concise |kənˈsīs|, adj.
 
  giving a lot of information clearly and in a few words; brief but
  comprehensive.
 
  Compare:
 
  Scenario: Defining the function foldl
Given I want do define foldl
Which has the type (in brackets) a to b to a (end of brackets),
   to a, to list of b, to a
And my arguments are called f, acc, and l
When l is empty
Then the result better be acc
Otherwise l is x cons xs
Then the result should be foldl f (in brackets) f acc x
  (end of brackets) xs
 
  To:
 
  foldl :: (a -> b -> a) -> a -> [b] -> a
  foldl f z [] = z
  foldl f z (x:xs) = foldl f (f z x) xs
 
  How is that more concise or preferable?

 I thought it was a joke.

 Roman

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 --
 Ian Ross   Tel: +43(0)6804451378   i...@skybluetrades.net
 www.skybluetrades.net

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can I use String without "" in ghci?

2013-09-01 Thread Michael Sloan
Not that I really want to encourage such a stringly typed practice, but
it wouldn't really be that much of a stretch.

* Use haskell-src-exts[0] and haskell-src-meta[1] to make a quasiquoter
that can parse Haskell syntax
* Use syb[2] or some other generics to find VarE and ConE expressions.  In
order to use SYB with TH, you'll want th-orphans[3]
* Use 'reify'[4] on the name of the variable or constructor, to see if it
exists.  If it doesn't[5], replace it with (LitE (StringL (nameBase name)))

Shouldn't really be much code at all! :D

-Michael

[0] http://hackage.haskell.org/package/haskell-src-exts
[1] http://hackage.haskell.org/package/haskell-src-meta
[2] http://hackage.haskell.org/package/syb
[3] http://hackage.haskell.org/package/th-orphans
[4]
http://hackage.haskell.org/packages/archive/template-haskell/latest/doc/html/Language-Haskell-TH.html#v:reify
[5] http://byorgey.wordpress.com/2011/08/16/idempotent-template-haskell/


On Sat, Aug 31, 2013 at 11:41 PM, Mateusz Kowalczyk fuuze...@fuuzetsu.co.uk
 wrote:

 On 01/09/13 07:02, yi lu wrote:
  I want to know if it is possible that I use strings without "".
 
  If I type
  *Prelude> foo bar*
  which actually I mean
  *Prelude> foo "bar"*
  However I don't want to type ""s.
 
  I have noticed if *bar* is predefined or it is a number, it can be used
 as
  arguments. But can other strings be used this way? Like in bash, we can
 use
  *ping 127.0.0.1* where *127.0.0.1* is an argument.
 
  If not, can *foo* be defined as a function so that it recognizes arguments
  like *bar* as *"bar"*?
 
 
  Thanks,
  Yi Lu
 
 
 You can't do this non-trivially. I think your only bet would be Template
 Haskell using the second approach and even then, it's a huge, huge
 stretch. I highly recommend against such ideas though. Do you really
 want anything that's not bound to be treated as a String? (The answer is
 ‘no’). I suggest that you get used to ‘"’s.

 If you have deep hatred for ‘"’, you could resort to spelling out the
 strings like ['f', 'o', 'o'] or even 'f':'o':'o':[].

 It's a bit like asking whether you can do addition everywhere by just
 typing the numbers to each other (no cheating and defining number
 literals as functions ;) ).

 --
 Mateusz K.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Conduit : is it possible to write this function?

2013-08-23 Thread Michael Snoyman
You can build this up using the >=< operator[1] in stm-conduit, something
like:

eitherSrc :: MonadResourceBase m
          => Source (ResourceT m) a -> Source (ResourceT m) b
          -> Source (ResourceT m) (Either a b)
eitherSrc src1 src2 = do
    join $ lift $ Data.Conduit.mapOutput Left src1 >=<
                  Data.Conduit.mapOutput Right src2

I think this can be generalized to work with more base monads with some
tweaks to (>=<).

[1]
http://haddocks.fpcomplete.com/fp/7.4.2/20130704-120/stm-conduit/Data-Conduit-TMChan.html#v:-62--61--60-


On Fri, Aug 23, 2013 at 11:32 AM, Erik de Castro Lopo
mle...@mega-nerd.com wrote:

 Hi all

 Using the Conduit library is it possible to write the function:

eitherSrc :: MonadResource m
   => Source m a -> Source m b -> Source m (Either a b)

 which combines two sources into a new output source such that data being
 produced asynchronously by the original two sources will be returned
 as either a Left or Right of the new source?

 If so, how?

 Cheers,
 Erik
 --
 --
 Erik de Castro Lopo
 http://www.mega-nerd.com/

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Diagrams and GTK

2013-08-13 Thread Michael Oswald

Hi Claude,


Cairo is thread safe* so you could render the whole thing (if it isn't
super huge dimensions) to an image surface in the background thread,
then displaying could be a matter of copying the correct part (for
scrolling) of the surface to the DrawingArea.


Yes, I think I will try this solution when I come back to the problem 
(currently there is a ton of other work to do). Thanks!


lg,
Michael




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Diagrams and GTK

2013-08-09 Thread Michael Oswald

Hello,

I am currently writing an application which draws the structure of some 
packets with help of the diagrams library directly to a GTK GUI.


Now the packets can have several hundreds of parameters which have to be 
drawn so it takes some seconds to calculate the diagram itself. Of 
course this now blocks the GUI thread, so the basic idea was to put the 
calculation in a separate thread. Unfortunately this doesn't work as 
laziness kicks in and the final diagram is calculated when it is 
rendered and not evaluated before. Using seq didn't help (because of 
WHNF) and there seems to be no deepseq instance for the diagrams.


Does somebody have an idea on how to speed this up / get the diagram 
evaluated strictly as especially scrolling the DrawingArea is now really 
a pain?
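
(The general pattern would be something like the sketch below; it needs an
NFData instance for the diagram type, which, as noted above, doesn't exist,
so this only illustrates the technique rather than being a drop-in fix:)

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar)
    import Control.DeepSeq (NFData, force)
    import Control.Exception (evaluate)

    -- Build the value on a worker thread, force it completely, and only hand
    -- it over once fully evaluated; the GUI thread then just takes the MVar.
    buildInBackground :: NFData a => IO a -> IO (MVar a)
    buildInBackground build = do
        box <- newEmptyMVar
        _ <- forkIO $ build >>= evaluate . force >>= putMVar box
        return box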


lg,
Michael


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: FunGEn 0.4.2, an easy cross-platform OpenGL/GLUT game engine

2013-08-08 Thread Simon Michael
I'm pleased to announce the hackage release of FunGEn 0.4!  
(Actually 0.4.2 as my 0.4 announcement did not reach the mail lists.)

FunGEn (Functional Game Engine) is a BSD-licensed, cross-platform,
OpenGL/GLUT-based, imperative game engine/framework.  With very few
dependencies and two example games, it's one of the easiest ways to
get started with game development in Haskell.

FunGEn was probably the first Haskell game framework, created by Andre
Furtado in 2002 (!). Here's his original feature list:

* Initialization, updating, removing, rendering and grouping
  routines for game objects;
* Definition of a game background (or map), including texture-based
  maps and tile maps;
* Reading and interpretation of the player's keyboard input;
* Collision detection;
* Time-based functions and pre-defined game actions;
* Loading and displaying of 24-bit bitmap files;
* Some debugging and game performance evaluation facilities;
* Sound support (actually for windows platforms only... :-[ )

What's new in 0.4.x:

* a new hakyll-based website, incorporating the old site content
* new haddock documentation
* tested with GHC 7.6
* fixed buggy input when holding down keys on windows
* input handlers now receive mouse position and modifier state
  (inspired by Pradeep Kumar; see fungentest.hs for examples)
* added q as quit key in examples

Home:http://joyful.com/fungen
Hackage: http://hackage.haskell.org/package/FunGEn
Code:http://hub.darcs.net/simon/fungen

Install from hackage: 

$ cabal update
$ cabal install FunGEn

Install source and run examples:

$ darcs get http://hub.darcs.net/simon/fungen
$ cd fungen
$ cabal install
$ (cd examples/pong; ghc pong; ./pong)
$ (cd examples/worms; ghc worms; ./worms)

Contribute patches:

- log in to hub.darcs.net and fork http://hub.darcs.net/simon/fungen
- push changes to your branch
- give me a pull request on #haskell-game

I have maintained FunGEn very sporadically. If you'd like to take it
and run with it, or co-maintain, let's chat! I'm sm on the
#haskell-game IRC channel.

-Simon

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: Angel 0.4.4

2013-07-30 Thread Michael Xavier
I'm pleased to announce the release of Angel 0.4.4.

angel is a daemon that runs and monitors other processes. It is similar to
djb's daemontools or the Ruby project god. Its goals are to keep a set of
services running, and to facilitate the easy configuration and restart of
those services.

New in 0.4.4

* Add env option to config to specify environment variables.
* Inject ANGEL_PROCESS_NUMBER environment variable into processes started
with count. This is intended for the purpose of logging, port enumeration,
etc.

Homepage: https://github.com/michaelxavier/angel
HackageDB: http://hackage.haskell.org/package/angel
-- 
Michael Xavier
http://www.michaelxavier.net
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] using network+conduit+tls for a client application?

2013-07-29 Thread Michael Snoyman
I've actually been intending to add the client side code to that package,
but I simply haven't gotten around to it yet. It's actually not that
complicated, but it does require some thought on the right interface for
things like approving/rejecting server side certificates. If you open up an
issue on Github for this, I'd be happy to continue the conversation there
and we can try to get out a new version of the library. (I just don't want
to spam the Cafe with an exploratory design discussion.)


On Mon, Jul 29, 2013 at 11:08 AM, Petr Pudlák petr@gmail.com wrote:

 Dear Haskellers,

 I wanted to write a small TLS application (connecting to IMAP over TLS)
 and it seemed natural to use conduit for that. I found the
 network-conduit-tls package, but then I realized it's meant only for server
 applications. Is there something similar for client applications?

   Thank you,
   Petr Pudlak

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] CmdArgs non-flag arguments with newtypes

2013-07-29 Thread Michael Orlitzky
On 07/23/2013 11:48 PM, wren ng thornton wrote:
 On 7/23/13 9:01 PM, Michael Orlitzky wrote:
 Obviously not what I want! Has anyone
 else run into this? Figured out a workaround?
 
 I haven't run into this specific problem, but I do have a sort of
 workaround. Whenever dealing with CmdArgs (or any similar system) I
 typically define *two* record types.
 
 The first one just gets the raw input from CmdArgs; no more, no less.
 Thus, for your issue, this record would use String. For other issues
 mentioned recently about optionality, this record would use Maybe.
 
 The second one is the one actually used by the program as the
 configuration object. This one is generated from the first by performing
 various sanity checks, filling in defaults, converting types from their
 CmdArgs version to the version I actually want, etc.
 
 IME, regardless of language, trying to conflate these notions of an
 external-facing parser-state and an internal-facing configuration-record
 just causes problems and accidental complexity. It's best to keep them
 separate IMO.
 

It's not the internal configuration and command-line arguments that I
wanted to share, but rather the config /file/ and command-line
arguments. Those two are then mashed together into the real
configuration data structure at run time.

The reason for the newtype is for the config file parsing -- without the
newtype, I'm forced to create orphan instances like some kind of animal =)

In any case, this turned out to be a real bug and not the usual user
error. Neil was nice enough to fix it, and it looks like it already hit
hackage, so I'm gonna go give it a shot!


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How can I use ghci more wisely?

2013-07-24 Thread Michael Sloan
This portion of haskell-mode (haskell-interactive-mode-eval-pretty) shows
what the UI for something like this could look like:

https://www.youtube.com/watch?v=pu9AGSOySlE

This isn't an answer to your question, though, because expanding subparts
of the output doesn't drive evaluation.  It would be very cool, and quite
possible, to have a variant of the Show typeclass that had output with such
structured laziness.

Another non-answer is to take a look at using vacuum[0] and
vacuum-graphviz[1] together, to get an idea of the heap structure of
unforced values.  I've made a gist demonstrating how to use these to
visualize the heap without forcing values[2].  This doesn't show any
concrete values (as that would require some serious voodoo), but does show
how the heap changes due to thunks being forced.

-Michael

[0] http://hackage.haskell.org/package/vacuum
[1] http://hackage.haskell.org/package/vacuum-graphviz
[2] https://gist.github.com/mgsloan/6068915


On Tue, Jul 23, 2013 at 7:30 PM, yi lu zhiwudazhanjiang...@gmail.com wrote:

 I am wondering how I can ask ghci to show an infinite list wisely.
 When I type

 *fst ([1..],[1..10])*

 The result is what as you may guess

 *1,2,3,4,...*(continues to show, cut now)

 How could I make ghci show

 *[1..]*

 this wise way, not the long long long list itself?

 Yi

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] CmdArgs non-flag arguments with newtypes

2013-07-23 Thread Michael Orlitzky
I ran into an issue wrapping a [String] in a newtype with CmdArgs.
You're supposed to be able to declare that a field contains a list of
non-flag arguments... this works fine:

  data Cfg = Cfg { whatever flags, usernames :: [String] }
  arg_spec = Cfg { whatever flags, usernames = def &= args }
  ...

If I now call my program with,

  ./foo --flag1 --flag2 arg1 arg2

then usernames = [ arg1, arg2 ] as desired. However, I need to wrap
usernames in a newtype for an unrelated reason:

  newtype Usernames = Usernames [String]

Now, CmdArgs unexpectedly drops the first non-flag argument. So,

  ./foo --flag1 --flag2 arg1 arg2

gives me usernames = [ arg2 ]. Obviously not what I want! Has anyone
else run into this? Figured out a workaround?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Wrapping all fields of a data type in e.g. Maybe

2013-07-21 Thread Michael Orlitzky
On 07/20/2013 04:49 PM, adam vogt wrote:
 
 Hi Michael,
 
 It's fairly straightforward to generate the new data with template
 haskell [1], and on the same page, section 10.7 'generic' zipWith is
 likely to be similar to your merging code.
 
 [1] 
 http://www.haskell.org/haskellwiki/Template_Haskell#Generating_records_which_are_variations_of_existing_records
 

I don't know any TH yet, but this looks like it just might work. Thanks
for the suggestion!



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: darcsden 1.1 released, darcs hub news 2013/07

2013-07-20 Thread Simon Michael
Hi all, here is a combined darcsden and darcs hub update.

darcsden 1.1 released
=

darcsden 1.1 is now available on hackage! This is the updated version
of darcsden which runs hub.darcs.net, so these changes are also
relevant to that site's users. (More darcs hub news below.)

darcsden is a web application for browsing and managing darcs
repositories, issues, and users, plus a basic SSH server which lets
users push changes without a system login. It is released under the
BSD license. You can use it:

- to browse and manage your local darcs repos with a more comfortable UI
- to make your repos browsable online, optionally with issue tracking
- to run a multi-user darcs hosting site, like hub.darcs.net

http://hackage.haskell.org/package/darcsden - cabal package \
http://hub.darcs.net/simon/darcsden - source \
http://hub.darcs.net/simon/darcsden/issues  - bug tracker

Release notes for 1.1
-

Fixed:

* 16: Layout of links and navigation places them offscreen
* 21: anchors on line numbers exist but line numbers not clickable
* 28: forking then deleting a private repo makes repos unviewable
* 29: darcs get to an invalid ssh repo url hangs
* 46: if user kills a push, the lock file is not removed, preventing
  subsequent pushes

New:

* the signup page security question is case-insensitive (darcs)
* login redirects to the my repos page
* a more responsive layout, with content first, buttons at top/right
* many other UI updates; font, headings, borders, whitespace, robustness
* more context sensitivity in buttons & links
* better next/previous page controls
* better support for microsoft windows, runs as a service
* builds with GHC 7.6 and latest libraries
* easier developer builds

Brand new, from the Enhancing Darcsden GSOC (some WIP):

* you can sign up, log in, and link existing accounts with your Google
  or Github id
* you can reset your password
* you can edit files through the web
* you can pack your repositories, allowing faster darcs get

Detailed change log: http://hub.darcs.net/simon/darcsden/CHANGES.md

How to help
---

darcsden is a small, clean codebase that is fun to hack on. Discussion
takes place on the #darcs IRC channel, and useful changes will quickly
be deployed at hub.darcs.net, providing a tight dogfooding/feedback
loop. Here's how to contribute a patch there:

1. register at hub.darcs.net
2. add your ssh key in settings so you can push
3. fork your own branch: http://hub.darcs.net/simon/darcsden , fork
4. copy to your machine: darcs get http://hub.darcs.net/yourname/darcsden
5. make changes, darcs record
6. push to hub: darcs push yourn...@hub.darcs.net:darcsden --set-default
7. your change will appear at http://hub.darcs.net/simon/darcsden/patches
8. discuss on #darcs, or ping me (sm, si...@joyful.com) to merge it

Credits
---

Alex Suraci created darcsden. Simon Michael led this release, which
includes contributions from Alp Mestanogullari, Jeffrey Chu, Ganesh
Sittampalam, and BSRK Aditya (sponsored by Google's Summer of Code).
And last time I forgot to mention two 1.0 contributors: Bertram
Felgenhauer and Alex Suraci.

darcsden depends on Darcs, Snap, GHC, and other fine projects from the
Haskell ecosystem, as well as Twitter Bootstrap, JQuery, and many more.





darcs hub news 2013/07
==

http://hub.darcs.net , aka darcs hub, is the darcs repository hosting
site I operate. It's like a mini github, but using darcs. You can:

- browse users, repos, files and changes
- publish darcs repos publicly or privately
- get, push and pull repos over ssh
- grant push access to other members
- fork repos, then view and merge upstream and downstream changes
- track issues

The site was announced on 2012/9/15
(http://thread.gmane.org/gmane.comp.version-control.darcs.user/26556).
Since then:

- The site has been deploying new darcsden work promptly; it includes
  all the 1.1 release improvements described above.

- The server's ram has doubled from 1G to 2G (thanks Linode). This
  means app restarts due to excessive memory use are less frequent.

- The front page's user list had become slow and has been optimised,
  halving the page load time.

- BSRK Aditya is doing his Google Summer of Code project on enhancing
  darcsden and darcs hub (mentored by darcs developer Ganesh
  Sittampalam). Find out more at http://darcs.net/GSoC/2013-Darcsden .

- The site is being used, with many small projects and a few
  well-known larger ones. Quick stats as of 2013/07/19:

    user accounts                         317
    repos                                 579
    disk usage                           2.5G
    uptime last 30 days                99.48%
    average response time last 30 days   1.6s

- The site remains free to use, including private repos.  Eventually,
  some kind of funding will be needed to keep it self-sustaining, and
  could also enable faster development

Re: [Haskell-cafe] Wrapping all fields of a data type in e.g. Maybe

2013-07-19 Thread Michael Orlitzky
On 07/16/2013 04:57 PM, Michael Orlitzky wrote:
 
 This all works great, except that when there's 20 or so options, I
 duplicate a ton of code in the definition of OptionalCfg. Is there some
 pre-existing solution that will let me take a Cfg and create a new type
 with Cfg's fields wrapped in Maybe?
 

For posterity, I report failure =)

If I parameterize the Configuration type by a functor, it breaks the
DeriveDataTypeable magic in cmdargs. The resulting manual definitions
along with the lenses to look inside the Identity functor well exceed
the duplicated code from OptionalCfg.

Combining the option parsing and config file parsing increases the
amount of code in the command-line parser by roughly an equal amount,
but in my opinion a worse consequence is that it conflates two unrelated
procedures. I very much like this:

  rc_cfg  <- from_rc
  cmd_cfg <- apply_args
  let opt_config = rc_cfg <> cmd_cfg
  ...

All things considered the duplicated data structure seems like the least
of three evils.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] TH splicing and recompilation checking

2013-07-16 Thread Michael Sloan
Yup, such a thing exists!  I think it's a little bit obscure because for
some bizarre reason it isn't reexported by Language.Haskell.TH:

http://hackage.haskell.org/packages/archive/template-haskell/2.8.0.0/doc/html/Language-Haskell-TH-Syntax.html#v:addDependentFile
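
For illustration, a small sketch of how it can be used (the file name here is
made up): the splice registers the external file as a dependency, so editing
that file forces recompilation of the module containing the splice.

    {-# LANGUAGE TemplateHaskell #-}
    import Language.Haskell.TH
    import Language.Haskell.TH.Syntax (addDependentFile, lift)

    fileLength :: Q Exp
    fileLength = do
      addDependentFile "data/extra-source.txt"             -- hypothetical path
      contents <- runIO (readFile "data/extra-source.txt")
      lift (length contents)   -- splice in a value derived from the file

A module elsewhere that says $(fileLength) should then be rebuilt whenever
data/extra-source.txt changes.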

-Michael


On Tue, Jul 16, 2013 at 10:41 AM, Johannes Waldmann 
waldm...@imn.htwk-leipzig.de wrote:

 Hi.

 we are using template Haskell to splice in some code
 that is produced by reading and transforming the contents of another file.

 now, if this other file is touched (by editing),
 but not the main file, then ghc (and cabal) do not realize
 that the main file does need to be recompiled.

 is there a way to tell them about the dependency?

 (example main file:
 https://github.com/apunktbau/co4/blob/master/CO4/Test/Queens.hs)

 - J.W.



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Wrapping all fields of a data type in e.g. Maybe

2013-07-16 Thread Michael Orlitzky
I have a common pattern in my command-line programs; I start out with a
configuration data type, which over-simplified looks like:

  data Cfg = Cfg { verbose :: Bool }

Now, there's usually a default configuration,

  default :: Cfg
  default = Cfg False

The user can override the defaults one of two ways, either via a config
file, or from the command-line. If both are specified, the command-line
takes precedence. The way I do this is with,

  data OptionalCfg = OptionalCfg { verbose :: Maybe Bool }

And then I define a Monoid instance for OptionalCfg which lets me merge
two of them. Once the two OptionalCfgs are merged, I merge *that* with
the default Cfg.
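
(For concreteness, a minimal sketch of such an instance for the one-field
example above; the right-hand value wins wherever it is Just, so merging the
config-file settings with the command-line settings in that order gives the
command line precedence:)

    import Data.Monoid

    data OptionalCfg = OptionalCfg { verbose :: Maybe Bool }  -- as above

    instance Monoid OptionalCfg where
      mempty = OptionalCfg Nothing
      mappend (OptionalCfg a) (OptionalCfg b) = OptionalCfg (maybe a Just b)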

This all works great, except that when there's 20 or so options, I
duplicate a ton of code in the definition of OptionalCfg. Is there some
pre-existing solution that will let me take a Cfg and create a new type
with Cfg's fields wrapped in Maybe?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Wrapping all fields of a data type in e.g. Maybe

2013-07-16 Thread Michael Orlitzky
On 07/16/2013 05:06 PM, Tom Ellis wrote:
 On Tue, Jul 16, 2013 at 04:57:59PM -0400, Michael Orlitzky wrote:
 This all works great, except that when there's 20 or so options, I
 duplicate a ton of code in the definition of OptionalCfg. Is there some
 pre-existing solution that will let me take a Cfg and create a new type
 with Cfg's fields wrapped in Maybe?
 
 You can always try
 
 data Cfg f = Cfg { verbose :: f Bool }
 
 and set f to Maybe or Identity depending on what you use it for.  It will be
 slightly notationally cumbersome to extract values from the Identity functor
 though.
 

Two votes for this approach. I'll give it a try and see whether it comes
out more or less verbose. Thanks!
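
(For reference, a rough sketch of the parameterised approach being discussed;
the helper names below are made up:)

    import Data.Functor.Identity   -- from the transformers package

    data Cfg f = Cfg { verbose :: f Bool }

    type FullCfg    = Cfg Identity   -- every field present
    type PartialCfg = Cfg Maybe      -- fields optionally overridden

    defaultCfg :: FullCfg
    defaultCfg = Cfg (Identity False)

    -- apply overrides from a PartialCfg on top of a FullCfg, field by field
    applyOverrides :: PartialCfg -> FullCfg -> FullCfg
    applyOverrides p d = Cfg (maybe (verbose d) Identity (verbose p))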



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Wrapping all fields of a data type in e.g. Maybe

2013-07-16 Thread Michael Orlitzky
On 07/16/2013 08:41 PM, John Lato wrote:
 The suggestion of parameterizing on a functor would be good, however
 there's another approach I've often seen (although it's not quite what
 you've asked for).  You can leave your config datatype alone, but
 instead of making it a monoid have your configuration parsers return
 functions with the type (Cfg - Cfg).  You can wrap these functions in
 Endo to get a monoid, combine them together, and then apply that
 function to the default configuration.
 

I'm using cmdargs for the command-line parsing, and I think (if I don't
want to abandon its magic entirely) that I'm stuck filling a data
structure automatically.

I settled on using (Maybe Foo) so that the default value returned by
cmdargs will be Nothing if the user doesn't supply that option; if I use
a plain Cfg object, and the user doesn't pass --verbose, I'll get False
back in its place and then I don't know whether or not that should
override the config file (supposing the user has verbose=True in the file).

Very clever though.
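
(For readers unfamiliar with the trick, a tiny self-contained sketch of the
Endo idea, independent of cmdargs; the parser stand-ins below are made up:)

    import Data.Monoid

    data Cfg = Cfg { verbose :: Bool } deriving Show

    fromFile, fromArgs :: Endo Cfg
    fromFile = Endo (\c -> c { verbose = True })  -- as if the file set verbose=true
    fromArgs = Endo id                            -- as if no flag was given

    -- Endo's mappend is function composition, so the left argument runs last;
    -- putting fromArgs first gives the command line the final say.
    finalCfg :: Cfg
    finalCfg = appEndo (fromArgs <> fromFile) (Cfg False)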


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reify type

2013-07-12 Thread Michael Sloan
Hello!

I'm not sure if this is what you're asking for, as it doesn't fit that line
of code.  'LitT' is a data constructor not a type constructor.  So instead
it'd be

reifyType (LitT ...) = ConE 'LitT ...

If this is what you're looking for, then 'lift' is what you want:
http://hackage.haskell.org/packages/archive/th-lift/latest/doc/html/Language-Haskell-TH-Lift.htmlhttp://hackage.haskell.org/packages/archive/th-lift/0.5.5/doc/html/Language-Haskell-TH-Lift.html

In particular, I recommend using this package of template haskell orphans,
rather than deriving your own: http://hackage.haskell.org/package/th-orphans
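
Concretely, with those orphan instances in scope the whole thing can collapse
to something like this sketch:

    import Language.Haskell.TH
    import Language.Haskell.TH.Syntax (lift)
    import Language.Haskell.TH.Instances ()  -- orphan Lift instances, incl. Lift Type

    -- turn a Type value back into an expression that rebuilds that Type
    reifyType :: Type -> Q Exp
    reifyType = lift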

Hope that helps!
-Michael



On Fri, Jul 12, 2013 at 4:45 AM, Jose A. Lopes jabolo...@google.com wrote:

 Hello everyone,

 Is there a way to automatically reify a type ?
 In other words, to do the following:

 reifyType (LitT ...) = ConT ''LitT ...

 I am using Template Haskell and I want the generated code to have
 access to Type datatypes that were available to the Template Haskell
 code.

 Cheers,
 Jose

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reify type

2013-07-12 Thread Michael Sloan
You might need to cabal update - I recently uploaded a new version to
hackage, because I realized the package was a bit out of date from the
github repo.

It works for me: https://gist.github.com/mgsloan/f9238b2272df43e53896


On Fri, Jul 12, 2013 at 5:49 AM, Jose A. Lopes jabolo...@google.com wrote:

 Hello,

 I am getting the following error message:

 No instance for (Lift Type)
   arising from a use of `lift'
 Possible fix: add an instance declaration for (Lift Type)

 I have imported Language.Haskell.TH.Instances.
 Is there anything else I have to do ?

 Regards,
 Jose

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Correct way to catch all exceptions

2013-07-12 Thread Michael Snoyman
When I implemented this stuff yesterday, I included `Deep` variants for
each function which used NFData. I'm debating whether I think the right
recommendation is to, by default, use the `async`/NFData versions of catch,
handle, and try, or to have them as separate functions.
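
(A minimal sketch of one way such a Deep variant can be written, not
necessarily the code from the post: force the result with NFData before
leaving the scope of the try.)

    import Control.DeepSeq (NFData, deepseq)
    import Control.Exception (Exception, try)

    tryDeep :: (Exception e, NFData a) => IO a -> IO (Either e a)
    tryDeep action = try $ do
      x <- action
      x `deepseq` return x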

I wrote up the blog post, both on the Yesod blog[1] and School of
Haskell[2]. The latter's a bit easier to use since it includes active code
snippets.

[1] http://www.yesodweb.com/blog/2013/07/catching-all-exceptions
[2]
https://www.fpcomplete.com/user/snoyberg/general-haskell/exceptions/catching-all-exceptions


On Fri, Jul 12, 2013 at 4:03 AM, John Lato jwl...@gmail.com wrote:

 I agree that how the exception was thrown is more interesting than the
 type.  I feel like there should be a way to express the necessary
 information via the type system, but I'm not convinced it's easy (or even
 possible).

 Another issue to be aware of is that exceptions can be thrown from pure
 code, so if you don't fully evaluate your return value an exception can be
 thrown later, outside the catch block.  In practice this usually means an
 NFData constraint, or some other constraint for which you can guarantee
 evaluation.

 In the past I've been pretty vocal about my opposition to exceptions.
  It's still my opinion that they do not make it easy to reason about
 exceptional conditions.  Regardless, as Haskell has them and uses them, I'd
 like to see improvements if possible.  So if anyone is exploring the design
 space, I'd be willing to participate.


 On Fri, Jul 12, 2013 at 12:57 AM, Michael Snoyman mich...@snoyman.com wrote:




 On Thu, Jul 11, 2013 at 6:07 PM, Felipe Almeida Lessa 
 felipe.le...@gmail.com wrote:

 On Thu, Jul 11, 2013 at 10:56 AM, Michael Snoyman mich...@snoyman.com
 wrote:
  The only
  approach that handles the situation correctly is John's separate thread
  approach (tryAll3).

 I think you meant tryAll2 here.  Got me confused for some time =).

 Cheers,

 --
 Felipe.


 Doh, yes, I did, thanks for the clarification.

 After playing around with this a bit, I was able to get an implementation
 of try, catch, and handle which work for any non-async exception, in monad
 transformers which are instances of MonadBaseControl (from monad-control).
 I'll try to write up my thoughts in something more coherent, likely a blog
 post.

 Michael



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Correct way to catch all exceptions

2013-07-11 Thread Michael Snoyman
On Thu, Jul 11, 2013 at 3:44 AM, John Lato jwl...@gmail.com wrote:

 Hi Michael,

 I don't think those are particularly niche cases, but I still think this
 is a bad approach to solving the problem.  My reply to Erik explicitly
 covers the worker thread case, and for running arbitrary user code (as in
 your top line) it's even simpler: just fork a new thread for the user code.
  You can use the async package or similar to wrap this, so it doesn't even
 add any LOCs.

 What I think is particularly niche is not being able to afford the cost of
 another fork, but I strongly doubt that's the case for Warp.

 The reason I think this is a bad design is twofold: first maintaining a
 list of exclusions like this (whether it's consolidated in a function or
 part of the exception instance) seems rather error-prone and increases the
 maintenance burden for very little benefit IMHO.

 Besides, it's still not correct.  What if you're running arbitrary user
 code that forks its own threads?  Then that code's main thread could get a
 BlockedIndefinitelyOnMVar exception that really shouldn't escape the user
 code, but with this approach it'll kill your worker thread anyway.  Or even
 malicious/brain-damaged code that does myThreadId >>= killThread?

 I like Ertugrul's suggestion though.  It wouldn't fix this issue, but it
 would add a lot more flexibility to exceptions.



I've spent some time thinking about this, and I'm beginning to think the
separate thread approach is in fact the right way to solve this. I think
there's really an important distinction to be made that we've all gotten
close to, but not specifically identified: the exception type itself isn't
really what we're interested, it's how that exception was thrown which is
interesting. I've put together an interesting demonstration[1].

The test I've created is that a worker thread is spawned. In the worker
thread, we run an action and wrap it in a tryAll function. Meanwhile, in
the main thread, we try to read a file and, when it fails, throw that
IOException to the worker thread. In this case, we want the worker thread
to halt execution immediately. With the naive try implementation (tryAll1)
this will clearly not happen, since the async exception will be caught as
if the subaction itself threw the exception. The more intelligent tryAll3
does the same thing, since it is viewing the thrown exception as
synchronous based on its type, when in reality it was thrown as an async
exception.[2] The only approach that handles the situation correctly is
John's separate thread approach (tryAll3). The reason is that it is
properly differentiating based on how the exception was thrown.
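
(As a concrete illustration of that separate-thread idea, in the spirit of the
gist rather than its exact code, the async package makes it a one-liner:)

    import Control.Concurrent.Async (withAsync, waitCatch)
    import Control.Exception (SomeException)

    -- Run the action in its own thread. Exceptions raised *by* the action are
    -- returned as Left; an async exception thrown *to* the calling thread
    -- interrupts the wait and propagates as usual.
    tryAll :: IO a -> IO (Either SomeException a)
    tryAll action = withAsync action waitCatch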

I'm going to play around with this a bit more; in particular, I want to see
how this works with monad transformer stacks. But I at least feel like I
have a slightly better conceptual grasp on what's going on here. Thanks for
pointing this out John.

Michael

[1] https://gist.github.com/snoyberg/5975592
[2] You could also do the reverse: throw an async exception synchronously,
and similarly get misleading results.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Correct way to catch all exceptions

2013-07-11 Thread Michael Snoyman
On Thu, Jul 11, 2013 at 6:07 PM, Felipe Almeida Lessa 
felipe.le...@gmail.com wrote:

 On Thu, Jul 11, 2013 at 10:56 AM, Michael Snoyman mich...@snoyman.com
 wrote:
  The only
  approach that handles the situation correctly is John's separate thread
  approach (tryAll3).

 I think you meant tryAll2 here.  Got me confused for some time =).

 Cheers,

 --
 Felipe.


Doh, yes, I did, thanks for the clarification.

After playing around with this a bit, I was able to get an implementation
of try, catch, and handle which work for any non-async exception, in monad
transformers which are instances of MonadBaseControl (from monad-control).
I'll try to write up my thoughts in something more coherent, likely a blog
post.

Michael
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Correct way to catch all exceptions

2013-07-10 Thread Michael Snoyman
There's a pattern that arises fairly often: catching every exception thrown
by code. The naive approach is to do something like:

result <- try someCode
case result of
    Left (e :: SomeException) -> putStrLn $ "It failed: " ++ show e
    Right realValue -> useRealValue

This seems perfectly valid, except that it catches a number of exceptions
which seemingly should *not* be caught. In particular, it catches the async
exceptions used by both killThread and timeout.

I think it's fair to say that there's not going to be a single function
that solves all cases correctly, but needing to write code that resumes
work in the case of an exception is a common enough situation that I think
we need to either have some guidelines for the right approach here, or
perhaps even a utility function along the lines of:

shouldBeCaught :: SomeException -> Bool

One first stab at such a function would be to return `False` for
AsyncException and Timeout, and `True` for everything else, but I'm not
convinced that this is sufficient. Are there any thoughts on the right
approach to take here?
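
(To make that first stab concrete, it could look roughly like the sketch below.
Note that the exception type used by timeout is not exported from base, which
is already a hint that a single predicate is hard to get right:)

    import Control.Exception

    shouldBeCaught :: SomeException -> Bool
    shouldBeCaught e =
      case fromException e :: Maybe AsyncException of
        Just _  -> False  -- ThreadKilled, UserInterrupt, etc.
        Nothing -> True   -- everything else, including timeout's exception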
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Correct way to catch all exceptions

2013-07-10 Thread Michael Snoyman
On Wed, Jul 10, 2013 at 1:01 PM, John Lato jwl...@gmail.com wrote:

 On Wed, Jul 10, 2013 at 5:02 PM, Erik Hesselink hessel...@gmail.com wrote:

 On Wed, Jul 10, 2013 at 10:39 AM, John Lato jwl...@gmail.com wrote:
  I think 'shouldBeCaught' is more often than not the wrong thing.  A
  whitelist of exceptions you're prepared to handle makes much more sense
 than
  excluding certain operations.  Some common whitelists, e.g. filesystem
  exceptions or network exceptions, might be useful to have.

 You'd think that, but there are common use cases. For example, if you
 have a queue of work items, and a thread (or threads) processing them,
 it is useful to catch all exceptions of these threads. You can then
 log the exception, remove the item from the queue and put it in some
 error bucket, and continue on to the next item. The same goes for e.g.
 socket listening threads etc.

 The thing here is that you are *not* actually handling the specific
 exception, but instead failing gracefully. But you still want to be
 able to kill the worker threads, and you don't want to handle
 exceptions that you cannot recover from even by moving on to the next
 work item.


 I think that's a particularly niche use case.  We have some similar code,
 and our approach is to have the thread re-throw (or terminate) after
 logging the exception.  There's a separate thread that monitors the thread
 pool, and when threads die new ones are spawned to take their place (unless
 the thread pool is shutting down, of course).  Spawning a new thread only
 happens on an exception and it's cheap anyway, so there's no performance
 issue.

 As Haskell currently stands trying to sort out thread-control and
 fatal-for-real exceptions from other exceptions seems rather fiddly,
 unreliable, and prone to change between versions, so I think it's best
 avoided.  If there were a standard library function to do it I might use
 it, but I wouldn't want to maintain it.


Maybe I'm just always working on niche cases then, because I run into this
problem fairly regularly. Almost any time you want to write a library that
will run code it doesn't entirely trust, this situation arises. Examples
include:

   - Writing a web server (like Warp) which can run arbitrary user code.
   Warp must fail gracefully if the user code throws an exception, without
   bringing down the entire server thread.
   - Writing some kind of batch processing job which uses any library which
   may throw an exception. A white list approach would not be sufficient here,
   since we want to be certain that any custom exception types have been
   caught.
   - A system which uses worker threads to do much of its work. You want to
   make certain the worker threads don't unexpectedly die because some
   exception was thrown that you were not aware could be thrown. I use this
   technique extensively in Keter, and in fact some work I'm doing on that
   code base now is what triggered this email.

I think that, overall, Ertugrul's suggestion is probably the right one: we
should be including richer information in the `Exception` typeclass so that
there's no guessing involved, and any custom exception types can explicitly
state what their recovery preference is. In the meanwhile, I think we could
get pretty far by hard-coding some rules about standard exception types,
and making an assumption about all custom exception types (e.g., they *
should* be caught by a catch all exceptions call).

If we combine these two ideas, we could have a new package on Hackage which
defines the right set of tags and provides a `tagsOf` function which works
on any instance of Exception, which uses the assumptions I mentioned in the
previous paragraph. If it's then decided that this is generally useful
enough to be included in the Exception typeclass, we have a straightforward
migration path:

   1. Add the new method to the Exception typeclass, with a default
   implementation that conforms with our assumptions.
   2. For any of the special standard exception types (e.g.,
   AsyncException), override that default implementation.
   3. Modify the external package to simply re-export the new method when
   using newer versions of base, using conditional compilation.
   4. Any code written against that external package would work with both
   current and future versions of base.
   5. The only incompatibility would be if someone writes code which
   overrides the typeclass method; that code would only work with newer bases,
   not current ones.

Any thoughts on this? I'm not sure exactly what would be the right method
to add to the Exception typeclass, but if we can come to consensus on that
and there are no major objections to my separate package proposal, I think
this would be something moving forward on, including a library proposal.

Michael
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] tangential request...

2013-06-23 Thread Michael Orlitzky
On 06/22/2013 11:09 PM, Evan Laforge wrote:
 You're overthinking it.  I just sent a whole screen.
 

You're probably right; done.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] tangential request...

2013-06-22 Thread Michael Orlitzky
On 06/22/2013 01:28 PM, Mark Lentczner wrote:
 3) Do not resize the terminal window

and

 5) Take a screen shot of the whole terminal window

are mutually exclusive?


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: haskell-names-0.1

2013-06-21 Thread Michael Sloan
Roman: Awesome!  I'm really glad that this is ready for use!

Andrew: I have a tool that's a step towards doing this.  Instead of using
haskell suite, it uses ghci via hint to query module exports, and then uses
TH to reify them.  This has the benefit of supporting everything that GHC
supports, whereas there are currently some cases that haskell-src-exts
can't parse.  There's also the issue of supporting declarations generated
by template haskell.

Here's an example of diffing some of its output:

https://github.com/mgsloan/api-compat/blob/master/examples/template-haskell.api.diff

The main reason I haven't released the tool is that I was intending to do
structural diffs / handle renaming, so it's somewhat unfinished.  However I
believe it's reasonably usable: instead, the output is just structured in a
way that's reasonably amenable to diffing.

-Michael


On Thu, Jun 20, 2013 at 11:12 PM, Andrew Cowie 
and...@operationaldynamics.com wrote:

 On Thu, 2013-06-20 at 18:13 +0300, Roman Cheplyaka wrote:
  Namely, it can do the following:
 
  *   for a module, compute its interface, i.e. the set of entities
  exported by the module, together with their original names.
 
  *   for each name in the module, figure out what it refers to — whether
  it's bound locally (say, by a where clause) or globally (and then
  give its origin).

 Is this a step toward being able to automatically derive an API version
 number [in the SO version sense of the word; ie, has a change happened
 requiring a version bump?]

 AfC
 Sydney


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell Platform 2013.2.0.0 64bit.pkg

2013-06-14 Thread Michael Orlitzky
On 06/13/2013 02:13 AM, Richard A. O'Keefe wrote:
 My original problem was that I wanted to load a particular set of
 packages using 'cabal install'.  It didn't work (cabal install issues)
 and while the maintainer reacted promptly and helpfully, cabal
 kept on trying to install the wrong version.
 
 Part of the problem was that blasting away ~/.cabal and ~/Library/Haskell
 wasn't enough:  it's necessary to blast away ~/.ghc as well (which I had
 forgotten existed and of course never saw).
 
 * It would be handy if 'uninstall-hs' had an option, say
 * uninstall-hs --user
 * so that a user could in one step make it as if they had never
 * used the Haskell Platform.
 
 (Sigh.  Changes to the GHC command line interface since 7.0 have
 broken one of the packages I used to have installed, and the
 maintainer's e-mail address doesn't work any more.  And sometimes
 it seems as if every time I install anything with cabal something
 else breaks.)
 
 PS. Earlier today cabal gave me some confusing messages which
 turned out to mean 'GSL isn't installed'.  Non-Haskell dependencies
 could be explained a little more clearly.
 

This doesn't offer an immediate solution to your problem, but as of
right now, the best set of blessed Haskell packages can be found in
the gentoo-haskell[1] overlay.

You can use Gentoo's portage package manager and the overlay on many
operating systems (OSX included) via the gentoo-prefix[2] project, which
builds you an entire Gentoo system in e.g. ~/prefix. It's then easy to
get packages added to the overlay, and tested against the rest of the
packages in Gentoo (which is what everything will be compiled against).

There's also support in portage for automatically rebuilding packages
whose dependencies have been broken by an upgrade, which prevents a huge
amount of breakage. Some good docs on getting a Haskell system up and
running on prefix would be a big help for anyone who wants an ecosystem
that will work for a few years.

Right now the documentation for prefix isn't great, but as I understand
it the project docs are going to be moved to the Gentoo wiki, and us
mere mortals will be able to update the instructions. Right now you need
CVS access, and nobody knows how the documentation XML nonsense works.

Burcin Erocal has an interesting project called lmonade[3] which
simplifies this for other projects, so it doesn't need to be painful.


[1] https://github.com/gentoo-haskell/
[2] http://www.gentoo.org/proj/en/gentoo-alt/prefix/
[3] http://www.lmona.de/


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: Angel 0.4.2

2013-06-12 Thread Michael Xavier
I'm pleased to announce the release of Angel 0.4.2 and that I have
officially taken over maintainership of this project. Thanks to Jamie
Turner for starting such a great project and allowing me to take over this
project.

angel is a daemon that runs and monitors other processes. It is similar to
djb's daemontools or the Ruby project god. Its goals are to keep a set of
services running, and to facilitate the easy configuration and restart of
those services.

0.4.1 added the count option to the config to control the number of
instances of a particular process to start.

0.4.2 added the pidfile option to specify the path of a pidfile to
generate when monitoring processes.

-- 
Michael Xavier
http://www.michaelxavier.net
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: hledger 0.21

2013-06-01 Thread Simon Michael
I'm pleased to announce hledger and hledger-web 0.21!

hledger is a command-line tool and haskell library for tracking
financial transactions, which are stored in a human-readable plain
text format. In addition to reporting, it can also help you record new
transactions, or convert CSV data from your bank. Add-on packages
include hledger-web (providing a web interface), hledger-irr and
hledger-interest.
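
(For anyone new to the format, a transaction in such a journal file looks
roughly like this:)

    2013/06/01 grocery store
        expenses:food       $25.00
        assets:checking    $-25.00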

hledger is inspired by and compatible with John Wiegley's Ledger. For
more, see http://hledger.org .

Install it:

cabal update; cabal install hledger [hledger-web]

For more installation help, see
http://hledger.org/MANUAL.html#installing .
Or, sponsor a ready-to-run binary for your platform:
http://hledger.org/DOWNLOAD.html .

Release notes (http://hledger.org/NEWS.html#hledger-0.21):

**Bugs fixed:**

  - parsing: don't fail when a csv amount has trailing whitespace (fixes
  #113)
  - web: don't show prices in the accounts sidebar (fixes #114)
  - web: show one line per commodity in charts. Needs more polish, but
  fixes #109.
  - web: bump yesod-platform dependency to avoid a cabal install failure

**Journal reading:**

  - balance assertions are now checked after reading a journal

**web command:**

  - web: support/require yesod 1.2
  - web: show zero-balance accounts in the sidebar (fixes #106)
  - web: use nicer select2 autocomplete widgets in the add form

**Documentation and infrastructure:**

  - add basic cabal test suites for hledger-lib and hledger


Release contributors:

- Xinruo Sun enhanced the hledger-web add form
- Clint Adams added cabal test suites
- Jeff Richards did hledger-web cleanup
- Peter Simons provided the build bot

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: new bridge! (prelude-prime)

2013-05-23 Thread Michael Snoyman
On Thu, May 23, 2013 at 11:38 AM, Anton Kholomiov anton.kholom...@gmail.com
 wrote:

 I wish it was possible to use an extension

 CustomPrelude = Prelude.Prime

 In the cabal file



I'm not necessarily opposed to this idea, but I'd like to point out that it
can have a negative impact on readability of an individual module, since
you can't tell which Prelude is being used. This is the same argument used
for putting LANGUAGE pragmas in a modules instead of listing them in a
cabal file. I think in the case of an alternate Prelude, the argument is
stronger, since language extensions often don't change the meaning of code,
but instead allow previously invalid code to be valid.

Michael
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Stream processing

2013-05-11 Thread Michael Snoyman
It's quite old at this point, but you may be interested in reading the
initial motivations for creating conduit when the iteratee pattern (and
enumerator library in particular) already existed:

https://github.com/snoyberg/conduit/blob/master/README.md#general-goal

I would say the only real component missing from your list is being able to
structure significantly more complicated control flows, such as the use
case of combining a web server and web client into a web proxy. That was
probably the example which finally pushed me to start thinking seriously
about an enumerator replacement. In conduit, this use case is addressed by
connect-and-resume, which essentially allows you to escape the inversion of
control normally introduced by the conduit pattern.
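
(For the curious, a tiny sketch of connect-and-resume using the current
Data.Conduit operators:)

    import Data.Conduit
    import qualified Data.Conduit.List as CL

    main :: IO ()
    main = do
      -- consume part of the stream, keeping the remainder as a ResumableSource
      (rest, firstTwo) <- CL.sourceList [1..10 :: Int] $$+ CL.take 2
      -- come back to the remainder later, possibly from very different code
      remaining <- rest $$+- CL.consume
      print (firstTwo, remaining)   -- ([1,2],[3,4,5,6,7,8,9,10])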


On Fri, May 10, 2013 at 5:56 PM, Ertugrul Söylemez e...@ertes.de wrote:

 Hello everybody,

 I'm trying to formulate the stream processing problem, which doesn't
 seem to be solved fully by the currently existing patterns.  I'm
 experimenting with a new idea, but I want to make sure that I don't miss
 any defining features of the problem, so here is my list.  A stream
 processing abstraction should:

   * have a categorically proven design (solved by iteratees, pipes),

   * be composable (solved by all of them),

   * be reasonably easy to understand and work with (solved by conduit,
 pipes),

   * support leftovers (solved by conduit and to some degree by
 iteratees),

   * be reliable in the presence of async exceptions (solved by conduit,
 pipes-safe),

   * hold on to resources only as long as necessary (solved by conduit
 and to some degree by pipes-safe),

   * ideally also allow upstream communication (solved by pipes and to
 some degree by conduit).

   * be fast (solved by all of them).

 Anything else you would put in that list?


 Greets,
 Ertugrul

 --
 Not to be or to be and (not to be or to be and (not to be or to be and
 (not to be or to be and ... that is the list monad.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Backward compatibility

2013-05-03 Thread Michael Sloan
Dependency breakage is certainly an unavoidable problem.  However, I think
Haskell is also in a much better position for having a technical solution
to the frustration of breakages.

Barring issues with changing datatypes / class instances, we can already
express many of the API changes you'd want to make to some library [1].
 Now, no one actually does what this proposal suggests - it's a lot of
work, and it doesn't work in general.  However, the fact that Haskell makes
something like this seem reasonable is heartening.

Of course, even if we had good tools that automatically refactored code to
use new APIs, it wouldn't be possible for any non-superficial changes.
 Even so, if a good enough refactoring tool existed, and it was popular
with both authors and users, a lot of the annoyance of dependency breakages
could be removed.

-Michael

[1]
http://haskellwiki.gitit.net/The%20Monad.Reader/Issue2/EternalCompatibilityInTheory



On Fri, May 3, 2013 at 2:04 AM, Ertugrul Söylemez e...@ertes.de wrote:

 Raphael Gaschignard dasur...@gmail.com wrote:

  I'm pretty sure most of us have experienced some issue with
  dependencies breaking, and it's probably the most frustrating problem
  we can have in any language. It's hard not to take this all a bit
  personally. Maybe if we think more about how to solve this (getting
  people to maintain their stuff, for example) we can make the world a
  better place instead of bickering about issues that are more or less
  language-agnostic really.

 The problem can't be solved technically.  It's a human problem after all
 and it's amplified by the experimentalism in this community.  I think
 the best we can do is to acknowledge its existence, which places us way
 ahead of mainstream programming communities.

 We don't pretend that type X in lib-0.1.0 is the same as type X in
 lib-0.2.0.  What we need to work on is the ability to actually combine
 multiple versions of the same package conveniently, i.e. we shouldn't
 view this combination as an error.


 Greets,
 Ertugrul

 --
 Not to be or to be and (not to be or to be and (not to be or to be and
 (not to be or to be and ... that is the list monad.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: hledger 0.20

2013-05-01 Thread Simon Michael
I'm pleased to announce hledger and hledger-web 0.20!

hledger is a command-line tool and haskell library for tracking
financial transactions, which are stored in a human-readable plain
text format. In addition to reporting, it can also help you record new
transactions, or convert CSV data from your bank. Add-on packages
include hledger-web (providing a web interface), hledger-irr and 
hledger-interest.

hledger is inspired by and compatible with John Wiegley's Ledger. For
more, see http://hledger.org .

Install it:

# cabal update; cabal install hledger-web

For more installation help, see http://hledger.org/MANUAL.html#installing .
Or, sponsor a ready-to-run binary for your platform: 
http://hledger.org/DOWNLOAD.html .

Release notes (http://hledger.org/NEWS.html#hledger-0.20):

**Bugs fixed:**

 * balance: a 0.19 regression which showed wrong total balance with `--flat` 
has been fixed (#94)
 * register: when `--date2` is used, the register is now sorted by the 
secondary date
 * web: some missing static  template files have been added to the package, 
fixing cabal-dev and hackage builds (#97, #98)
 * web: some hardcoded static urls have been fixed
 * Dependencies and code have been updated to support the latest
   libraries and GHC versions.  For now, hledger requires GHC 7.2+
   and hledger-web requires GHC 7.4+.

**Journal reading:**

 - DOS-style line-endings are now also supported in journal and rules files.
 - `!` is now accepted in the status field as well as `*`, like ledger
 - The *actual date* and *effective date* terminology has changed to *primary 
date* and *secondary date*.
   Use `--date2` to select the secondary date for reports. (`--aux-date` or 
`--effective` are also accepted
   for ledger and backwards compatibility).
 - Per-posting dates are supported, using hledger tags or ledger's posting date 
syntax
 - Comment and tag handling has been improved

**CSV reading:**

 - CSV conversion rules have a simpler, more flexible 
[syntax](MANUAL.html#csv-files).
   Existing rules files will need to be updated manually:
   - the filename is now `FILE.csv.rules` instead of `FILE.rules`
   - `FIELD-field N` is now `FIELD %N+1` (or set them all at once with a 
`fields` rule)
   - `base-currency` is now `currency`
   - `base-account` is now `account1`
   - account-assigning rules:
 add `if` before the list of regexps,
 add indented `account2 ` before the account name
 - parenthesised amounts are parsed as negative

**Querying:**

 - Use `code:` to match the transaction code (check number) field
 - Use `amt:` followed by `<`, `=` or `>` and a number N to match
   amounts by magnitude. Eg `amt:<0` or `amt:=100`. This works only
   with single-commodity amounts (multi-commodity amounts are
   always matched).
 - `tag:` can now match (exact, case sensitive) tag values. Eg `tag:TAG=REGEXP`.

**add comand:**

 - Transaction codes and comments (which may contain tags) can now be entered, 
following a date or amount respectively. (#45)
 - The current entry may be restarted by entering `<` at any prompt. (#47)
 - Entries are displayed and confirmed before they are written to the journal.
 - Default values may be specified for the first entry by providing them as 
command line arguments.
 - Miscellaneous UI cleanups

**register command:**

 - The `--related`/`-r` flag shows the other postings in each transaction, like 
ledger.
 - The `--width`/`-w` option increases or sets the output width.

**web command:**

 - The web command now also starts a browser, and auto-exits when unused, by 
default (local ui mode).
   With `--server`, it keeps running and logs requests to the console (server 
mode).
 - Bootstrap is now used for styling and layout
 - A favicon is served
 - The search field is wider
 - yesod devel is now supported; it uses `$LEDGER_FILE` or `~/.hledger.journal`
 - the `blaze_html_0_5` build flag has been reversed and renamed to 
`blaze_html_0_4`

**Add-ons:**

 - The hledger-interest and hledger-irr commands have been released/updated.
 - hledger-chart and hledger-vty remain unmaintained and deprecated.

**Documentation and infrastructure:**

 - The hledger docs and website have been reorganised and updated
 - Manuals for past releases are provided as well as the latest dev version
 - hledger has moved from darcs and darcs hub to git and github (!)
 - The bug tracker has moved from google code to github
 - Feature requests and project planning are now managed on trello
 - A build bot builds against multiple GHC versions on each commit

Release contributors:

- Sascha Welter commissioned register enhancements (--related and --width)
- David Patrick contributed a bounty for add enhancements
- Joachim Breitner added support for ! in status field
- Xinruo Sun provided hledger-web build fixes
- Peter Simons provided hledger-web build fixes, and a build bot
- Marko Kocić provided hledger-web fixes


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org

Re: [Haskell-cafe] Fwd: Google Summer of Code, news

2013-04-29 Thread Michael Snoyman
I'll throw in that Marcos mentioned this very issue to me about his code
before showing it to me. It was written the way it was for the requirements
of his course. He volunteered to translate the comments for me, but I told
him it wasn't necessary in order to get an initial feel for the code (I
also read Spanish somewhat).


On Mon, Apr 29, 2013 at 5:25 PM, Kristopher Micinski krismicin...@gmail.com
 wrote:

 I second that advice!  I can technically read Spanish, but I find the
 complexity of the language barrier compounded with trying to
 understand the code becomes more confusing than I'd prefer :-).

 Kris


 On Sun, Apr 28, 2013 at 2:19 PM, Mateusz Kowalczyk
 fuuze...@fuuzetsu.co.uk wrote:
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1
 
  On 28/04/13 18:37, Marcos Pividori wrote:
  Greetings,
 
  I am a Computer Science student from Argentina. I am interested in
  working this summer in a project related to Haskell for the Google
  Summer of Code. I have been discussing my idea with Michael Snoyman
  in order to have a clearer idea. Now, I would like to know the
  community interest in this project.
 
  I want to develop a server-side library in Haskell for sending
  push notifications to devices running different OS, such as
  Android, iOS, Windows Phone, BlackBerry, and so on.
 
  To pass a subject, I have recently worked with Yesod (a Web
  Framework based in Haskell) developing a server to comunicate with
  Android-powered devices through Google Cloud Messaging.  (It is
  available: https://github.com/MarcosPividori/Yesod-server-for-GCM
  )
 
  To develop this project, I have read a lot about this service and
  Yesod libraries, and I developed two programs, a server written in
  Haskell and an Android application for mobile phones. Also, I
  developed an EDSL to write programs which exchange information with
  the devices.
 
  I would be grateful if you could give me your opinion about this
  project and the proposal I am starting to write.
 
  While I don't have anything to contribute to the project idea itself,
  I had a look at your code on GitHub and I'd like to recommend that in
  the future, when writing code that will get published and used by
  others, you use English. This especially applies to projects aimed to
  benefit a wider community such as this one. You seem to be mixing the
  two together which doesn't help readability either.
 
 
  - --
  Mateusz K.
  -BEGIN PGP SIGNATURE-
  Version: GnuPG v2.0.19 (GNU/Linux)
 
  iQIcBAEBAgAGBQJRfWhMAAoJEM1mucMq2pqXJH8P/RqWzAHFlbkLPRSzRK3w+Us2
  I+VDOGxF6627RwWSX3P5gY84t8lhGQZ8M9voGptKnNE+2xmArtqQIn6a9Jj01o3n
  PcV6SuacG5qNpHawQdVXSFoIGkQ9tNhSDu4HYgXTRQD1tptxd31pKi9gN2EE6ieA
  HgdR6g688edLjdfbGj18CDNnFxIJhzsFYoqaNgBZB4ZpcCisQzdkwGELx8c3+fa2
  deSbsvA808q/xPiFZ6DDCOF0aXQmvQwtVdCdhyrn4BPMhGF2da9zqcy3VNPHWMd5
  VNnw4USY1vVdsTY6fKts5IyuNhIl7WTGypNUbIMl3gCpH1RWgO8FbKZQmyvosPPv
  xCA7qpPVkc8sg2qSBiQyJ66upg5503bCoijNYxGmCAaFm83bJdUgwrhnOBoyguPC
  S86g6zNUrbV6oQDAPy3unOKLlCGJhlQgEx9dbXPDCQiqWeUqhVipqxf0WHDcTPMW
  prjWzqZTJkm1kq11G4Ues4sXpJDzG0syWroaO4ah0A6aCZzuFFX8NqcQvEufzRCS
  ydOF9Qgr5nuVcBndjekYw9uxA6UtRDKoyvmvr0y5TDfk7w42dC/qPOhK5xkndz7u
  pjXnIGanqBur1B5Fw5jfilzc5eViOYDGGtZqz4/mKV6lfQclTljTVI461HrSQW+H
  SVdK4oqvGU0ZCD94BBHv
  =+KLZ
  -END PGP SIGNATURE-
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell/functional-language people in Raleigh/Durham, North Carolina, USA?

2013-04-16 Thread Michael Alan Dorman
Taylor Hedberg t...@tmh.cc writes:
 Benjamin Redelings, Tue 2013-04-16 @ 16:25:26-0400:
 I'm curious if there are any other people on this list who are
 interested in Haskell and functional languages in the Triangle Area,
 in North Carolina?

 I am!

 Funny you should ask, actually, as I was just wondering the same thing
 myself earlier today.

Me three!

Mike.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to design an network client with user program.

2013-04-10 Thread Michael Snoyman
It doesn't seem like you're trying to perform multiple actions
simultaneously. For example, you don't need to be able to read from the
server and send data back at the same time. Instead, you'll have a single
thread of execution. Am I right?

If so, it seems like the simplest thing would be for you to allow users to
write something like:

Conduit MsgFromServer m MsgToServer

Assuming you had conduits to convert an incoming byte stream to a stream of
MsgFromServer and the equivalent for sending, you'd end up with something
like:

appSource appData $$ toMsgFromServer =$ clientSuppliedConduit =$
fromMsgToServer =$ appSink appData
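
For illustration, the user-supplied piece might be as small as the following
sketch (the message types and reply logic below are placeholders, standing in
for the protocol types assumed above):

    import Data.Conduit

    data MsgFromServer = MsgFromServer   -- stand-ins for the real protocol types
    data MsgToServer   = MsgToServer

    clientSuppliedConduit :: Monad m => Conduit MsgFromServer m MsgToServer
    clientSuppliedConduit = awaitForever $ \_msg -> yield MsgToServer
      -- real user logic would inspect _msg and yield zero or more replies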

Michael


On Tue, Apr 9, 2013 at 1:09 PM, Alexander V Vershilov 
alexander.vershi...@gmail.com wrote:


 Hello.

 I have the following problem: I have a network client that connects to a server,
 listens for messages and generates responses. So the control flow can be
 represented as:

 server -- input -> {generate output} -> output

 Output can be generated using default implementation or can overriden by
 user.

 The main difficulty appears when I need to add a user program on top
 of this logic,
 i.e. from the user side I want to have a DSL, something like

 withClient $ do
   x <- send message
waitFor x
timeout 500
forever $ sendRandomMessage

 i.e. the ability to send messages, to wait for some event (a message to
 come), and to wait for a
 timeout.

 The question is how to define such logic without a big overhead. I see a
 solution using conduit; it's possible to create 3 processes: listener,
 user, sender.

  + user +
  ||
 -input - listener +-+ sender -

 and use TQueue or TChan to send messages between them; however, there may
 be other possible solutions that use fewer resources, or another design.


 --
 Alexander

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to design an network client with user program.

2013-04-10 Thread Michael Snoyman
On Wed, Apr 10, 2013 at 2:08 PM, Alexander V Vershilov 
alexander.vershi...@gmail.com wrote:


 On 10 April 2013 14:56, Michael Snoyman mich...@snoyman.com wrote:

 It doesn't seem like you're trying to perform multiple actions
 simultaneously. For example, you don't need to be able to read from the
 server and send data back at the same time. Instead, you'll have a single
 thread of execution. Am I right?


  Not exactly: the user code is not only ServerMessage-driven but can generate
  messages and work on its own (time events, or some external events).
  For example, user code may generate random messages even when there is no
  message from the server (i.e. wait for some
  timeout and then feed the sender with a message), or do some long-running
  work (e.g. wait for 5 minutes); in both
  of those cases the single-threaded pipeline is broken.


Then some kind of TQueue or TChan approach is going to be necessary.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Conduit] weird action of leftover.

2013-04-08 Thread Michael Snoyman
It's a bug in your implementation of takeLine I believe. It doesn't take
into account that lines can span multiple chunks. When you call takeLine
the first time, you get "L1\n". leftover puts a chunk with exactly those
contents back. When you call takeLine the second time, it gets the chunk
"L1\n", and your splitAt gives you back "L1\n" and "". The "" is then
leftover, and the next call to takeLine gets it.

Your takeLine needs to include logic saying there's no newline in this
chunk at all, let's get the next chunk and try that. You can look at the
source to lines[1] for an example of the concept.
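
For illustration only, a rough, untested sketch of a chunk-aware takeLine along
those lines (buffering chunks until a newline or end of input is seen) might
look like:

    import qualified Data.ByteString as DBS
    import Data.ByteString (ByteString)
    import Data.Conduit
    import Control.Monad (unless)

    takeLine :: Monad m => Consumer ByteString m (Maybe ByteString)
    takeLine = go []
      where
        _lf = 10  -- the newline byte, '\n'
        go acc = do
          mBS <- await
          case mBS of
            Nothing
              | null acc  -> return Nothing
              | otherwise -> return (Just (DBS.concat (reverse acc)))
            Just bs ->
              case DBS.elemIndex _lf bs of
                Nothing -> go (bs : acc)  -- no newline in this chunk, keep buffering
                Just i  -> do
                  let (l, ls) = DBS.splitAt (i + 1) bs
                  unless (DBS.null ls) (leftover ls)
                  return (Just (DBS.concat (reverse (l : acc))))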

Michael

[1]
http://haddocks.fpcomplete.com/fp/7.4.2/20130313-1/conduit/src/Data-Conduit-Binary.html#lines


On Mon, Apr 8, 2013 at 8:44 AM, Magicloud Magiclouds 
magicloud.magiclo...@gmail.com wrote:

 Say I have code like below. If I comment the leftover in main, I got (Just
 "L1\n", Just "L2\n", Just "L3\n", Just "L4\n"). But if I did not comment
 the leftover, then I got (Just "L1\n", Just "L1\n", Just "", Just "L2\n").
 Why is it not (Just "L1\n", Just "L1\n", Just "L2\n", Just "L3\n")?

 takeLine :: (Monad m) => Consumer ByteString m (Maybe ByteString)
 takeLine = do
   mBS <- await
   case mBS of
     Nothing -> return Nothing
     Just bs ->
       case DBS.elemIndex _lf bs of
         Nothing -> return $ Just bs
         Just i -> do
           let (l, ls) = DBS.splitAt (i + 1) bs
           leftover ls
           return $ Just l

 main = do
   m <- runResourceT $ sourceFile "test.simple" $$ (do
     a <- takeLine
     leftover $ fromJust a
     b <- takeLine
     c <- takeLine
     d <- takeLine
     return (a, b, c, d))
   print m

 --
 竹密岂妨流水过
 山高哪阻野云飞

 And for G+, please use magiclouds#gmail.com.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GSoC Project Proposal: Markdown support for Haddock

2013-04-08 Thread Michael Snoyman
It supports ```language blocks, but not autolink detection. I have not
fully documented which features are supported. I also haven't done any
performance analysis versus other tools, simply because my goal is in no
way high efficiency. It is fast enough for my use cases, and I don't intend
to spend significant time optimizing unless a problematic level of
inefficiency is discovered. If anyone else wants to put together
benchmarks, I'll be happy to lend some guidance.


On Mon, Apr 8, 2013 at 12:50 PM, Niklas Hambüchen m...@nh2.me wrote:

 Could you elaborate a bit on which markdown features you support (or
 even better: write it into your module haddocks)?

 Things like
 - autolink detection
 - ```language blocks?

 Also, you build on performance-oriented libraries - it would be cool if
 you could make a small benchmark comparing with the standard
 C/Python/Ruby parser implementations; AFAIK there is a standard Markdown
 test suite that this could run against.

 Concerning the project proposal:

 I especially find the last feature useful for programming documentation,
 and would love to have them in a potential haddock successor. I was also
 pleasantly surprised that pandoc seems to handle all of this (even with
 code syntax highlighting).

 On 05/04/13 02:10, Michael Snoyman wrote:
  In case it can be useful in any way for this project, my markdown
  package[1] is certainly available for scavenging, though we'd likely
  want to refactor it to not use conduit (I can't imagine conduit being a
  good dependency for Haddock).
 
  [1] http://hackage.haskell.org/package/markdown

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Conduit] weird action of leftover.

2013-04-08 Thread Michael Snoyman
Yes, that's a fair explanation.


On Tue, Apr 9, 2013 at 7:48 AM, Magicloud Magiclouds 
magicloud.magiclo...@gmail.com wrote:

 Thank you for the reply. I've learnt the code of lines. So it is because
 how ByteString works, that the conduit is not a stream of bytes, but
 chunks, right?


 On Tue, Apr 9, 2013 at 12:12 PM, Michael Snoyman mich...@snoyman.comwrote:

 It's a bug in your implementation of takeLine I believe. It doesn't take
 into account that lines can span multiple chunks. When you call takeLine
 the first time, you get L1\n. leftover puts a chunk with exactly those
 contents back. When you call takeLine the second time, it gets the chunk
 L1\n, and your splitAt gives you back L1\n and . The  is then
 leftover, and the next call to takeLine gets it.

 Your takeLine needs to include logic saying there's no newline in this
 chunk at all, let's get the next chunk and try that. You can look at the
 source to lines[1] for an example of the concept.

 Michael

 [1]
 http://haddocks.fpcomplete.com/fp/7.4.2/20130313-1/conduit/src/Data-Conduit-Binary.html#lines


 On Mon, Apr 8, 2013 at 8:44 AM, Magicloud Magiclouds 
 magicloud.magiclo...@gmail.com wrote:

 Say I have code like below. If I comment the leftover in main, I got
 (Just L1\n, Just L2\n, Just L3\n, Just L4\n). But if I did not
 comment the leftover, then I got (Just L1\n, Just L1\n, Just , Just
 L2\n).
 Why is not it (Just L1\n, Just L1\n, Just L2\n, Just L3\n)?

 takeLine :: (Monad m) = Consumer ByteString m (Maybe ByteString)
 takeLine = do
   mBS - await
   case mBS of
 Nothing - return Nothing
 Just bs -
   case DBS.elemIndex _lf bs of
 Nothing - return $ Just bs
 Just i - do
   let (l, ls) = DBS.splitAt (i + 1) bs
   leftover ls
   return $ Just l

 main = do
   m - runResourceT $ sourceFile test.simple $$ (do
 a - takeLine
 leftover $ fromJust a
 b - takeLine
 c - takeLine
 d - takeLine
 return (a, b, c, d))
   print m

 --
 竹密岂妨流水过
 山高哪阻野云飞

 And for G+, please use magiclouds#gmail.com.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





 --
 竹密岂妨流水过
 山高哪阻野云飞

 And for G+, please use magiclouds#gmail.com.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GSoC Project Proposal: Markdown support for Haddock

2013-04-04 Thread Michael Snoyman
On Thu, Apr 4, 2013 at 7:49 PM, Johan Tibell johan.tib...@gmail.com wrote:

 Hi all,

 Haddock's current markup language leaves something to be desired once
 you want to write more serious documentation (e.g. several paragraphs
 of introductory text at the top of the module doc). Several features
 are lacking (bold text, links that render as text instead of URLs,
 inline HTML).

 I suggest that we implement an alternative haddock syntax that's a
 superset of Markdown. It's a superset in the sense that we still want
 to support linkifying Haskell identifiers, etc. Modules that want to
 use the new syntax (which will probably be incompatible with the
 current syntax) can set:

 {-# HADDOCK Markdown #-}

 on top of the source file.

 Ticket: http://trac.haskell.org/haddock/ticket/244

 -- Johan

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


+1

In case it can be useful in any way for this project, my markdown
package[1] is certainly available for scavenging, though we'd likely want
to refactor it to not use conduit (I can't imagine conduit being a good
dependency for Haddock).

[1] http://hackage.haskell.org/package/markdown
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Make a DSL serializable

2013-03-24 Thread Michael Better
Isn't this similar to the problem Cloud Haskell had to solve to send code
to another process to run?

Mike
On Mar 24, 2013 5:06 PM, Brandon Allbery allber...@gmail.com wrote:

 On Sun, Mar 24, 2013 at 5:44 PM, Corentin Dupont 
 corentin.dup...@gmail.com wrote:

  But it always bothered me that this state is not serializable...


 I am not quite sure how to respond to that. You seem to be asking for
 magic.

 That kind of state has never been sanely serializable. Not in Haskell,
 not anywhere else. The usual hack is to dump an entire memory image to
 disk, either as an executable (see gcore and undump; also see how the
 GNU emacs build dumps a preloaded emacs executable) or by dumping the
 data segment as raw bytes and reloading it as such (which doesn't work so
 well in modern demand paged executables; it can work better with a virtual
 machine environment, and various Lisp and Smalltalk implementations dump
 and reload their raw VM images this way).

 I would not be surprised if what you seem to be asking for turns out to be
 yet another guise of the halting problem.

 --
 brandon s allbery kf8nh   sine nomine
 associates
 allber...@gmail.com
 ballb...@sinenomine.net
 unix, openafs, kerberos, infrastructure, xmonad
 http://sinenomine.net

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANNOUNCE: Win32-services

2013-03-14 Thread Michael Steele
I uploaded a new package named Win32-services. This library is a partial
binding to the Win32 System Services API. It's now easier to write Windows
service applications in Haskell.

The hackage page http://hackage.haskell.org/package/Win32-services [1]
demonstrates simple usage. There are also 2 examples included with the
sources. One is a translation of Microsoft's official example.

[1]: http://hackage.haskell.org/package/Win32-services

-- Michael Steele
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Join a transformer

2013-03-13 Thread Michael Snoyman
I'm wondering if this pattern exists and has a name. We have the concept of
joining a Monad:

join :: Monad m => m (m a) -> m a

How about joining a monad transformer?

joinT :: (Monad m, MonadTrans t) => t (t m) a -> t m a

I believe implementing this in terms of MonadTransControl[1] might be
possible, but I was wondering if there's an already existing idiom for this.

Michael

[1]
http://haddocks.fpcomplete.com/fp/7.4.2/20130301-40/monad-control/Control-Monad-Trans-Control.html
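
(Not an answer from the thread, just an illustration of the shape of joinT for
one concrete transformer, ReaderT: both layers are fed the same environment.)

    import Control.Monad.Trans.Reader (ReaderT (..))

    -- Collapse two ReaderT layers that share the same environment type.
    joinReaderT :: ReaderT r (ReaderT r m) a -> ReaderT r m a
    joinReaderT m = ReaderT $ \r -> runReaderT (runReaderT m r) r
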
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Open-source projects for beginning Haskell students?

2013-03-12 Thread Simon Michael

Hi Brent,

hledger is an existing project whose purpose, code and installation 
process is relatively simple. I'm happy to do a bit of mentoring. If 
this sounds suitable, I can suggest some easy fixes or enhancements, eg:


...hmm. In fact nothing on my long wishlist[1][2] looks all that quick. 
They're kind of tricky, or require a fair bit of architectural 
knowledge, or they are unglamorous and boring. (I'd love to be proven 
wrong.)


shelltestrunner[3] or rss2irc[4] are much smaller projects, but their 
backlogs are not all that pretty either. If any of these are of interest 
let me know and I can look harder for suitable jobs.


-Simon


[1] 
https://code.google.com/p/hledger/issues/list?can=2q=colspec=ID+Type+Status+Summary+Reporter+Opened+Starssort=groupby=mode=gridy=Componentx=Statuscells=tilesnobtn=Update


[2] http://hub.darcs.net/simon/hledger/NOTES.org#2140

[3] http://hub.darcs.net/simon/shelltestrunner/browse/NOTES.org

[4] http://hub.darcs.net/simon/shelltestrunner/browse/NOTES.org


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Open-source projects for beginning Haskell students?

2013-03-12 Thread Simon Michael

[4] http://hub.darcs.net/simon/rss2irc/browse/NOTES.org

On 3/12/13 2:13 PM, Simon Michael wrote:

Hi Brent,

hledger is an existing project whose purpose, code and installation
process is relatively simple. I'm happy to do a bit of mentoring. If
this sounds suitable, I can suggest some easy fixes or enhancements, eg:

...hmm. In fact nothing on my long wishlist[1][2] looks all that quick.
They're kind of tricky, or require a fair bit of architectural
knowledge, or they are unglamorous and boring. (I'd love to be proven
wrong.)

shelltestrunner[3] or rss2irc[4] are much smaller projects, but their
backlogs are not all that pretty either. If any of these are of interest
let me know and I can look harder for suitable jobs.

-Simon


[1]
https://code.google.com/p/hledger/issues/list?can=2q=colspec=ID+Type+Status+Summary+Reporter+Opened+Starssort=groupby=mode=gridy=Componentx=Statuscells=tilesnobtn=Update

[2] http://hub.darcs.net/simon/hledger/NOTES.org#2140

[3] http://hub.darcs.net/simon/shelltestrunner/browse/NOTES.org

[4] http://hub.darcs.net/simon/shelltestrunner/browse/NOTES.org





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Open-source projects for beginning Haskell students?

2013-03-11 Thread Michael Orlitzky
On 03/11/2013 11:48 AM, Brent Yorgey wrote:
 
 So I'd like to do it again this time around, and am looking for
 particular projects I can suggest to them.  Do you have an open-source
 project with a few well-specified tasks that a relative beginner (see
 below) could reasonably make a contribution towards in the space of
 about four weeks? I'm aware that most tasks don't fit that profile,
 but even complex projects usually have a few simple-ish tasks that
 haven't yet been done just because no one has gotten around to it
 yet.

It's not exciting, but adding doctest suites with examples to existing
packages would be a great help.

  * Good return on investment.

  * Not too hard.

  * The project is complete when you stop typing.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] File I/O benchmark help (conduit, io-streams and Handle)

2013-03-09 Thread Michael Snoyman
Just to clarify: the problem was in fact with my code, I was not passing
O_TRUNC to the open system call. Gregory's C code showed me the problem.
Once I add in that option, all the different benchmarks complete in roughly
the same amount of time. So given that our Haskell implementations based on
Handle are just about as fast as a raw C implementation, I'd say Handle is
performing very well.

Apologies if I got anyone overly concerned.


On Fri, Mar 8, 2013 at 12:36 PM, Simon Marlow marlo...@gmail.com wrote:

 1GB/s for copying a file is reasonable - it's around half the memory
 bandwidth, so copying the data twice would give that result (assuming no
 actual I/O is taking place, which is what you want because actual I/O will
 swamp any differences at the software level).

 The Handle overhead should be negligible if you're only using hGetBufSome
 and hPutBuf, because those functions basically just call read() and write()
 when the amount of data is larger than the buffer size.

 There's clearly something suspicious going on here, unfortunately I don't
 have time right now to investigate, but I'll keep an eye on the thread.

 Cheers,
 Simon


 On 08/03/13 08:36, Gregory Collins wrote:

 +Simon Marlow
 A couple of comments:

   * maybe we shouldn't back the file by a Handle. io-streams does this
 by default out of the box; I had a posix file interface for unix
 (guarded by CPP) for a while but decided to ditch it for simplicity.
 If your results are correct, given how slow going by Handle seems to
 be I may revisit this, I figured it would be good enough.
   * io-streams turns Handle buffering off in withFileAsOutput. So the
 difference shouldn't be as a result of buffering. Simon: is this an
 expected result? I presume you did some Handle debugging?
   * the IO manager should not have any bearing here because file code
 doesn't actually ever use it (epoll() doesn't work for files)
   * does the difference persist when the file size gets bigger?
   * your file descriptor code doesn't handle EINTR properly, although
 you said you checked that the file copy is being done?
   * Copying a 1MB file in 1ms gives a throughput of ~1GB/s. The other
 methods have a more believable ~70MB/s throughput.

 G


 On Fri, Mar 8, 2013 at 7:30 AM, Michael Snoyman mich...@snoyman.com
 mailto:mich...@snoyman.com wrote:

 Hi all,

 I'm turning to the community for some help understanding some
 benchmark results[1]. I was curious to see how the new io-streams
 would work with conduit, as it looks like a far saner low-level
 approach than Handles. In fact, the API is so simple that the entire
 wrapper is just a few lines of code[2].

 I then added in some basic file copy benchmarks, comparing
 conduit+Handle (with ResourceT or bracket), conduit+io-streams,
 straight io-streams, and lazy I/O. All approaches fell into the same
 ballpark, with conduit+bracket and conduit+io-streams taking a
 slight lead. (I haven't analyzed that enough to know if it means
 anything, however.)

 Then I decided to pull up the NoHandle code I wrote a while ago for
 conduit. This code was written initially for Windows only, to work
 around the fact that System.IO.openFile does some file locking. To
 avoid using Handles, I wrote a simple FFI wrapper exposing open,
 read, and close system calls, ported it to POSIX, and hid it behind
 a Cabal flag. Out of curiosity, I decided to expose it and include
 it in the benchmark.

 The results are extreme. I've confirmed multiple times that the copy
 algorithm is in fact copying the file, so I don't think the test
 itself is cheating somehow. But I don't know how to explain the
 massive gap. I've run this on two different systems. The results you
 see linked are from my local machine. On an EC2 instance, the gap
 was a bit smaller, but the NoHandle code was still 75% faster than
 the others.

 My initial guess is that I'm not properly tying into the IO manager,
 but I wanted to see if the community had any thoughts. The relevant
 pieces of code are [3][4][5].

 Michael

  [1] http://static.snoyman.com/streams.html
  [2] https://github.com/snoyberg/conduit/blob/streams/io-streams-conduit/Data/Conduit/Streams.hs
  [3] https://github.com/snoyberg/conduit/blob/streams/conduit/System/PosixFile.hsc
  [4] https://github.com/snoyberg/conduit/blob/streams/conduit/Data/Conduit/Binary.hs#L54
  [5] https://github.com/snoyberg/conduit/blob/streams/conduit/Data/Conduit/Binary.hs#L167

Re: [Haskell-cafe] File I/O benchmark help (conduit, io-streams and Handle)

2013-03-08 Thread Michael Snoyman
That demonstrated the issue: I'd forgotten to pass O_TRUNC to the open
system call. Adding that back makes the numbers much more comparable.

Thanks for the input everyone, and Gregory for finding the actual problem
(as well as pointing out a few other improvements).


On Fri, Mar 8, 2013 at 12:13 PM, Gregory Collins g...@gregorycollins.netwrote:

 Something must be wrong with the conduit NoHandle code. I increased the
 filesize to 60MB and implemented the copy loop in pure C, the code and
 results are here:

 https://gist.github.com/gregorycollins/5115491

 Everything but the conduit NoHandle code runs in roughly 600-620ms,
 including the pure C version.

 G


 On Fri, Mar 8, 2013 at 10:13 AM, Alexander Kjeldaas 
 alexander.kjeld...@gmail.com wrote:




 On Fri, Mar 8, 2013 at 9:53 AM, Gregory Collins 
 g...@gregorycollins.netwrote:

 On Fri, Mar 8, 2013 at 9:48 AM, John Lato jwl...@gmail.com wrote:

 For comparison, on my system I get
 $ time cp input.dat output.dat

 real 0m0.004s
 user 0m0.000s
 sys 0m0.000s


 Does your workstation have an SSD? Michael's using a spinning disk.


 If you're only copying a GB or so, it should only be memory traffic.

 Alexander



 --
 Gregory Collins g...@gregorycollins.net

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





 --
 Gregory Collins g...@gregorycollins.net

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] File I/O benchmark help (conduit, io-streams and Handle)

2013-03-07 Thread Michael Snoyman
Hi all,

I'm turning to the community for some help understanding some benchmark
results[1]. I was curious to see how the new io-streams would work with
conduit, as it looks like a far saner low-level approach than Handles. In
fact, the API is so simple that the entire wrapper is just a few lines of
code[2].

I then added in some basic file copy benchmarks, comparing conduit+Handle
(with ResourceT or bracket), conduit+io-streams, straight io-streams, and
lazy I/O. All approaches fell into the same ballpark, with conduit+bracket
and conduit+io-streams taking a slight lead. (I haven't analyzed that
enough to know if it means anything, however.)
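
(Illustrative only: the copy under test is essentially of this shape; this
sketch is not the benchmark's actual code, and the benchmarks vary the
source/sink implementations behind it.)

    import Data.Conduit (($$))
    import Data.Conduit.Binary (sourceFile, sinkFile)
    import Control.Monad.Trans.Resource (runResourceT)

    -- Stream one file into another with conduit.
    copyFile' :: FilePath -> FilePath -> IO ()
    copyFile' src dst = runResourceT $ sourceFile src $$ sinkFile dst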

Then I decided to pull up the NoHandle code I wrote a while ago for
conduit. This code was written initially for Windows only, to work around
the fact that System.IO.openFile does some file locking. To avoid using
Handles, I wrote a simple FFI wrapper exposing open, read, and close system
calls, ported it to POSIX, and hid it behind a Cabal flag. Out of
curiosity, I decided to expose it and include it in the benchmark.

The results are extreme. I've confirmed multiple times that the copy
algorithm is in fact copying the file, so I don't think the test itself is
cheating somehow. But I don't know how to explain the massive gap. I've run
this on two different systems. The results you see linked are from my local
machine. On an EC2 instance, the gap was a bit smaller, but the NoHandle
code was still 75% faster than the others.

My initial guess is that I'm not properly tying into the IO manager, but I
wanted to see if the community had any thoughts. The relevant pieces of
code are [3][4][5].

Michael

[1] http://static.snoyman.com/streams.html
[2]
https://github.com/snoyberg/conduit/blob/streams/io-streams-conduit/Data/Conduit/Streams.hs
[3]
https://github.com/snoyberg/conduit/blob/streams/conduit/System/PosixFile.hsc
[4]
https://github.com/snoyberg/conduit/blob/streams/conduit/Data/Conduit/Binary.hs#L54
[5]
https://github.com/snoyberg/conduit/blob/streams/conduit/Data/Conduit/Binary.hs#L167
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] File I/O benchmark help (conduit, io-streams and Handle)

2013-03-07 Thread Michael Snoyman
One clarification: it seems that sourceFile and sourceFileNoHandle have
virtually no difference in speed. The gap comes exclusively from sinkFile
vs sinkFileNoHandle. This makes me think that it might be a buffer copy
that's causing the slowdown, in which case the benchmark may in fact be
accurate.
On Mar 8, 2013 8:30 AM, Michael Snoyman mich...@snoyman.com wrote:

 Hi all,

 I'm turning to the community for some help understanding some benchmark
 results[1]. I was curious to see how the new io-streams would work with
 conduit, as it looks like a far saner low-level approach than Handles. In
 fact, the API is so simple that the entire wrapper is just a few lines of
 code[2].

 I then added in some basic file copy benchmarks, comparing conduit+Handle
 (with ResourceT or bracket), conduit+io-streams, straight io-streams, and
 lazy I/O. All approaches fell into the same ballpark, with conduit+bracket
 and conduit+io-streams taking a slight lead. (I haven't analyzed that
 enough to know if it means anything, however.)

 Then I decided to pull up the NoHandle code I wrote a while ago for
 conduit. This code was written initially for Windows only, to work around
 the fact that System.IO.openFile does some file locking. To avoid using
 Handles, I wrote a simple FFI wrapper exposing open, read, and close system
 calls, ported it to POSIX, and hid it behind a Cabal flag. Out of
 curiosity, I decided to expose it and include it in the benchmark.

 The results are extreme. I've confirmed multiple times that the copy
 algorithm is in fact copying the file, so I don't think the test itself is
 cheating somehow. But I don't know how to explain the massive gap. I've run
 this on two different systems. The results you see linked are from my local
 machine. On an EC2 instance, the gap was a bit smaller, but the NoHandle
 code was still 75% faster than the others.

 My initial guess is that I'm not properly tying into the IO manager, but I
 wanted to see if the community had any thoughts. The relevant pieces of
 code are [3][4][5].

 Michael

 [1] http://static.snoyman.com/streams.html
 [2]
 https://github.com/snoyberg/conduit/blob/streams/io-streams-conduit/Data/Conduit/Streams.hs
 [3]
 https://github.com/snoyberg/conduit/blob/streams/conduit/System/PosixFile.hsc
 [4]
 https://github.com/snoyberg/conduit/blob/streams/conduit/Data/Conduit/Binary.hs#L54
 [5]
 https://github.com/snoyberg/conduit/blob/streams/conduit/Data/Conduit/Binary.hs#L167

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Simple way to do something like ArrowChoice.right on a Conduit? (version 1.0.0)

2013-03-05 Thread Michael Snoyman
Wow, I hadn't realized that someone had implemented resumable sinks... and
now resumable conduits too! Very interesting.

I'm not sure if I entirely understand your use case, but in general it
should be possible to have multiple Conduits running one after the other.
Here's an example of restarting an accumulator after every multiple of 5:

https://www.fpcomplete.com/user/snoyberg/random-code-snippets/multiple-conduits

Michael


On Mon, Mar 4, 2013 at 6:55 PM, Joey Adams joeyadams3.14...@gmail.comwrote:

 On Sun, Mar 3, 2013 at 10:24 PM, Joey Adams joeyadams3.14...@gmail.comwrote:

 ...

 Here's a possible API for a resumable Conduit:

 newtype ResumableConduit i m o = -- hidden --

 newResumableConduit :: Monad m => Conduit i m o -> ResumableConduit i m o

 -- | Feed the 'Source' through the conduit, and send any output from the
 -- conduit to the 'Sink'.  When the 'Sink' returns, close the 'Source', but
 -- leave the 'ResumableConduit' open so more data can be passed through it.
 runResumableConduit
     :: Monad m
     => ResumableConduit i m o
     -> Source m i
     -> Sink o m r
     -> m (ResumableConduit i m o, r)
 ...


  While trying to implement this, I found a more elegant interface for
 resuming the ResumableConduit:

 -- | Fuse a 'ResumableConduit' to a 'Sink'.  When the 'Sink' returns,
 -- it returns the 'ResumableConduit' so the caller can reuse it.
 (=$++) :: Monad m
    => ResumableConduit i m o
    -> Sink o m r
    -> Sink i m (ResumableConduit i m o, r)

 This takes advantage of Sink's return value to forward the
 ResumableConduit.  I don't think a ($=++) can be implemented.

 Advantages:

  * (=$++) is easier to implement than 'runResumableConduit' since it only
 has to fuse two pipes together instead of three.

  * Pretty syntax: (resumable', a) <- source $$ resumable =$++ sink

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Simple way to do something like ArrowChoice.right on a Conduit? (version 1.0.0)

2013-03-05 Thread Michael Snoyman
On Wed, Mar 6, 2013 at 5:48 AM, Joey Adams joeyadams3.14...@gmail.comwrote:

 On Tue, Mar 5, 2013 at 9:24 AM, Michael Snoyman mich...@snoyman.comwrote:

 ...

 I'm not sure if I entirely understand your use case, but in general it
 should be possible to have multiple Conduits running one after the other.
 Here's an example of restarting an accumulator after every multiple of 5:


 https://www.fpcomplete.com/user/snoyberg/random-code-snippets/multiple-conduits


 Neat.  I didn't think to do that with plain Conduits.  I did realize I
 could use a resumable conduit as a temporary filter (basically what your
 example does).  This suggests that a resumable conduit can be used in any
 consumer (Conduit or Sink), not just a sink.  Perhaps it can even be used
 in a producer, though different operators would be needed (+$= instead of
 =$+).

 In my compression example, the incoming message sink needs to feed chunks
 of compressed data to a zlib conduit.  It can't just hand full control of
 the input to zlib; it has to decode messages, and only send CompressedData
 messages through zlib.  I need a resumable conduit for that.


I'm still not sure I follow this. In the example I linked to, the go
function within breaker could arbitrarily modify the data before it gets
passed on to the inner Conduit. So it seems like it should be possible to
achieve your goals this way. But I may just not fully understand your use
case.

Michael


 Here's my current implementation of resumable conduits [1].  I don't know
 much about conduit finalizers; I mostly followed 'connectResume' and
 'pipeL'.

 The main wrinkle is that when the ResumableConduit receives an upstream
 terminator, it forwards it to the sink, rather than telling the conduit
 that the stream ended.  This allows the conduit to be reused.  Only when we
 finish the ResumableConduit () do we send it the stream terminator.

 I'll continue toying with this.  It might be possible to factor out
 terminator forwarding, and generalize connectResume to support resumable
 sources, conduits, and sinks.

 Thanks for the help,
 -Joey

  [1]:
 https://github.com/joeyadams/hs-resumable-conduit/blob/master/ResumableConduit.hs

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Simple way to do something like ArrowChoice.right on a Conduit? (version 1.0.0)

2013-03-03 Thread Michael Snoyman
On Fri, Mar 1, 2013 at 4:18 AM, Joey Adams joeyadams3.14...@gmail.comwrote:

 Can I transform a conduit so some values are passed through unchanged, but
 others go through the conduit?  For example:

 right :: Conduit i m o -> Conduit (Either x i) m (Either x o)

 This is named after the Control.Arrow combinator of the same name:

 right :: ArrowChoice a => a b c -> a (Either d b) (Either d c)

 Here's my use case (simplified): I want to compress data with
 zlib-conduit, which provides:

 compress :: Conduit (Flush ByteString) m (Flush ByteString)

 The Flush wrapper
 (http://hackage.haskell.org/packages/archive/conduit/latest/doc/html/Data-Conduit.html#t:Flush)
 lets me flush the compressor so it will yield cached data right
 away (though hurting compression a little).

 But before compressing the data, I want to encode it, using this conduit:

 encode :: Conduit Entry m ByteString

 I want to combine these, so that if I send a 'Flush', it bypasses 'encode'
 and feeds to 'compress':

 compressEncode :: Conduit (Flush Entry) m (Flush ByteString)

 Thus, I need a variant of 'encode' that passes 'Flush' along:

 encode' :: Conduit (Flush Entry) m (Flush ByteString)

 In my actual program, I don't use Flush, so providing a Conduit combinator
 just for Flush would not help me.

 Is something like 'right' possible to implement with Conduit's public
 API?  Here's an implementation using Data.Conduit.Internal (untested):

 import Control.Monad (liftM)
 import Data.Conduit.Internal (Pipe(..), ConduitM(..), Conduit)

 right :: Monad m => Conduit i m o -> Conduit (Either x i) m (Either x o)
 right = ConduitM . rightPipe . unConduitM

 rightPipe :: Monad m
           => Pipe i i o () m ()
           -> Pipe (Either x i) (Either x i) (Either x o) () m ()
 rightPipe p0 = case p0 of
     HaveOutput p c o  -> HaveOutput (rightPipe p) c (Right o)
     NeedInput p c     -> NeedInput p' (rightPipe . c)
       where p' (Left x)  = HaveOutput (rightPipe p0) (return ()) (Left x)
             p' (Right i) = rightPipe $ p i
     Done r            -> Done r
     PipeM mp          -> PipeM $ liftM rightPipe mp
     Leftover p i      -> Leftover (rightPipe p) (Right i)


I'm fairly certain this cannot be implemented using only the public API.
Your implementation looks solid to me.


 I'm wondering if we could have a Data.Conduit.Arrow module, which provides
 a newtype variant of Conduit that implements Arrow, ArrowChoice, etc.:

 import qualified Data.Conduit as C

 newtype Conduit m i o = Conduit (C.Conduit i m o)

 -- May need Monad constraints for these
 instance Category (Conduit m)
 instance Arrow (Conduit m)
 instance ArrowChoice (Conduit m)


As I think you point out in your next email, Conduit can't really be an
instance of Arrow. IIRC, there was quite a bit of talk about that when
pipes came out, but some of the features of a Pipe (such as allowing input
and output to occur at different speeds) means that it can't be achieved.
Nonetheless, I think adding some helping combinators based around Arrows
for Conduit makes sense.


 Does 'Conduit' follow Category, Monad, MonadTrans laws* these days?  I'm
 not talking about Pipe in general, just the special case of it represented
 by the 'Conduit' type alias:

 Conduit i m o = ConduitM i o m () = Pipe i i o () m ()

 Or are there some thorny issues (e.g. leftovers) that make following these
 laws impossible in some cases?


It's easy to prove that a Conduit with leftovers does not follow the
Category laws:

id = awaitForever yield
(.) = (=$=)

id . leftover x /= leftover x

That was the motivation for adding the leftover type parameter to the Pipe
datatype: if you want to get closer to a Category instance (whatever
closer would mean here), you need to make sure that the leftover
parameter is set to Void. However, even in such a case, there's at least
one deviation from strict Category behavior. The order in which finalizers
are run does not fully respect the associative laws[1]. In this case, the
deviation is intentional: conduit is more concerned with ensuring strict
resource usage than associativity. I touched on this point briefly in a
recent conduit 1.0 blog post.

In my opinion, this is evidence that Category is not the right abstraction
to be used for streaming data, since it doesn't give us the ability to
guarantee prompt finalization.

[1] https://github.com/snoyberg/conduit/pull/57


  Thanks for the input,
 -Joey

  * Assume functions that use Data.Conduit.Internal do so correctly.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Future of MonadCatchIO

2013-03-03 Thread Michael Snoyman
On Sun, Mar 3, 2013 at 6:07 PM, Ertugrul Söylemez e...@ertes.de wrote:

 Arie Peterson ar...@xs4all.nl wrote:

  Would anyone have a problem with a deprecation of
  MonadCatchIO-transformers, and a failure to update it to work with a
  base without 'block' and 'unblock'?

 Yes.  This is a simplified variant of a monad I use:

 newtype Continue f m a = Continue (m (Maybe a, f (Continue f m a)))

 It's related to Cofree and has a valid and very straightforward
 MonadCatchIO instance.  However, it's probably impossible to write a
 valid MonadTransControl/MonadBaseControl instance for it.


Perhaps there's a good reason why it's impossible to make such an instance.
Are you sure that your MonadCatchIO instance is well founded? What happens
if you use finally? Are you guaranteed that your cleanup function is called
once, and precisely once?

These are the problems I ran into with MonadCatchIO three years ago, almost
precisely. The main monad for Yesod was built around ContT, and I ended up
with double-free bugs. It's true that I had to move away from ContT in
order to get the desired semantics, but that has nothing to do with
MonadCatchIO vs monad-control. The former just made it seem like I had
working code when in fact I had a lurking bug.


 So I kindly ask you not to deprecate MonadCatchIO.  The reason I'm
 hesitant about moving to monad-control is that it's hard to understand
 and also very difficult to define for CPS monads.  It is commonly
 believed to be impossible.

 Also I've seen at least one article about the incorrectness of
 monad-control.  That's one further reason I like to avoid it.


I've seen the criticisms of monad-control (or at least I believe I have).
What I've seen has been dubious at best. I'll fully agree that the
implementation is hard to follow, but it's designed for efficiency. The
underlying concept is simple: capture the current state and pipe it through
the underlying monad. If you needed to lift a control operation for the
ReaderT or StateT monads, you would likely end up with an almost exact
replica of what monad-control does for you.
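
(An illustrative, hand-rolled example of that idea rather than monad-control's
actual code: lifting IO's finally into StateT by capturing the current state
and piping it through the base monad.)

    import qualified Control.Exception as E
    import Control.Monad.Trans.State (StateT (..), runStateT)

    -- Run the body with the current state and guarantee the finalizer runs.
    -- Any state changes made by the finalizer are discarded, which is
    -- exactly the kind of subtlety monad-control has to deal with.
    liftedFinally :: StateT s IO a -> StateT s IO b -> StateT s IO a
    liftedFinally body finalizer = StateT $ \s ->
        runStateT body s `E.finally` runStateT finalizer s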

Michael
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] RFC: rewrite-with-location proposal

2013-02-26 Thread Michael Snoyman
On Tue, Feb 26, 2013 at 12:06 PM, Simon Peyton-Jones
simo...@microsoft.comwrote:

  Do you mean that the proposal itself won't work, or specifically
  implementing this feature in terms of existing rewrite rules won't work?

  I meant the latter.

  I'll admit to ignorance on the internals of GHC, but it seems like doing
  the shallow source location approach would be far simpler than a full
  trace. I'd hate to lose a very valuable feature because we can't implement
  the perfect feature.

  I agree with that sentiment. But in fact I suspect that getting a stack is
  little or no harder than the shallow thing.

  My “implicit parameter” suggestion was trying to re-use an existing
  feature, with a small twist, to do what you want, rather than to implement
  something brand new.


I personally have very little opinion about how this feature is
implemented. But would this approach implement the shallow trace, or the
full stack trace?

Michael


  

 Simon

 ** **

 *From:* michael.snoy...@gmail.com [mailto:michael.snoy...@gmail.com] *On
 Behalf Of *Michael Snoyman
 *Sent:* 25 February 2013 18:19
 *To:* Simon Peyton-Jones
 *Cc:* Alexander Kjeldaas; Simon Hengel; Haskell Cafe

 *Subject:* Re: [Haskell-cafe] RFC: rewrite-with-location proposal


 On Mon, Feb 25, 2013 at 4:42 PM, Simon Peyton-Jones simo...@microsoft.com
 wrote:

 I’m afraid the rewrite-rule idea won’t work.  RULES are applied during
 optimisation, when tons of inlining has happened and the program has been
 shaken around a lot. No reliable source location information is available
 there.

  

 ** **

 Do you mean that the proposal itself won't work, or specifically
 implementing this features in terms of existing rewrite rules won't work?*
 ***

  

  See http://hackage.haskell.org/trac/ghc/wiki/ExplicitCallStack; and
 please edit it.

  

  ** **

 One thing I'd disagree with on that page is point (3). While it's
 certainly nice to have a full stack trace, implementing just shallow call
 information is incredibly useful. For logging and test framework usages, it
 in fact completely covers the use case. And even for debugging, I think it
 would be a massive step in the right direction.

 ** **

 I'll admit to ignorance on the internals of GHC, but it seems like doing
 the shallow source location approach would be far simpler than a full
 trace. I'd hate to lose a very valuable feature because we can't implement
 the perfect feature.

  

  One idea I had, which that page does not yet describe, is to have an
 implicit parameter,
 something like ?loc::Location, with

   errLoc :: ?loc::Location => String -> a

   errLoc s = error (“At “ ++ ?loc ++ “\n” ++ s)

  This behaves exactly like an ordinary implicit parameter, EXCEPT that if
  there is no binding for ?loc::Location, then the current location is used.
  Thus

  myErr :: ?loc::Location => Int -> a

  myErr n = errLoc (show n)

  foo :: Int -> Int

  foo n | n < 0     = myErr n
        | otherwise = ...whatever...

  

 When typechecking ‘foo’ we need ?loc:Location, and so the magic is that we
 use the location of the call of myErr in foo.

  

 Simon

  

  

  

 *From:* haskell-cafe-boun...@haskell.org [mailto:
 haskell-cafe-boun...@haskell.org] *On Behalf Of *Alexander Kjeldaas
 *Sent:* 25 February 2013 12:16
 *To:* Simon Hengel
 *Cc:* Haskell Cafe
 *Subject:* Re: [Haskell-cafe] RFC: rewrite-with-location proposal

  

 On Mon, Feb 25, 2013 at 12:46 PM, Simon Hengel s...@typeful.net wrote:***
 *

  On Mon, Feb 25, 2013 at 10:40:29AM +0100, Twan van Laarhoven wrote:
  I think there is no need to have a separate REWRITE_WITH_LOCATION
  rule. What if the compiler instead rewrites 'currentLocation' to the
  current location? Then you'd just define the rule:
 
  {-# REWRITE errorLoc error = errorLoc currentLocation #-}

 REWRITE rules are only enabled with -O.  Source locations are also
 useful during development (when you care more about compilation time
 than efficient code and hence use -O0).  So I'm not sure whether it's a
 good idea to lump those two things together.

   

 I could imagine that source locations being useful when debugging rewrite
 rules for example.

  

 I think your argument makes sense, but why not fix that specifically?

  

 {-# REWRITE ALWAYS errorLoc error = errorLoc currentLocation #-}

  

 Alexander

  


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

  ** **

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] RFC: rewrite-with-location proposal

2013-02-25 Thread Michael Snoyman
On Mon, Feb 25, 2013 at 11:13 AM, Simon Hengel s...@typeful.net wrote:

 On Mon, Feb 25, 2013 at 09:57:04AM +0100, Joachim Breitner wrote:
  Hi,
 
  Am Montag, den 25.02.2013, 08:06 +0200 schrieb Michael Snoyman:
   Quite a while back, Simon Hengel and I put together a proposal[1] for
   a new feature in GHC. The basic idea is pretty simple: provide a new
   pragma that could be used like so:
  
   error :: String -> a
   errorLoc :: IO Location -> String -> a
   {-# REWRITE_WITH_LOCATION error errorLoc #-}
 
  in light of attempts to split base into a pure part (without IO) and
  another part, I wonder if the IO wrapping is really necessary.
 
  Can you elaborate the reason why a simple Location -> is not enough?

 The IO helps with reasoning.  Without it you could write code that does
 something different depending on the call site.  Here is an example:


 someBogusThingy :: Int
 someBogusThingy = ..

 someBogusThingyLoc :: Location -> Int
 someBogusThingyLoc loc
   | (even . getLine) loc = 23
   | otherwise = someBogusThingyLoc

 {-# REWRITE_WITH_LOCATION someBogusThingy someBogusThingyLoc #-}

 Now someBogusThingy behaves different depending on whether the call site
 is on an even or uneven line number.  Admittedly, the example is
 contrived, but I hope it illustrates the issue.

 I do not insist on keeping it.  If we, as a community, decide, that we
 do not need the IO here.  Then I'm fine with dropping it.


And FWIW, my vote *does* go towards dropping it. I put this proposal in the
same category as rewrite rules in general: it's certainly possible for a
bad implementation to wreak havoc, but it's the responsibility of the
person using the rewrite rules to ensure that doesn't happen.

Michael
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] RFC: rewrite-with-location proposal

2013-02-25 Thread Michael Snoyman
On Mon, Feb 25, 2013 at 2:15 PM, Alexander Kjeldaas 
alexander.kjeld...@gmail.com wrote:

 On Mon, Feb 25, 2013 at 12:46 PM, Simon Hengel s...@typeful.net wrote:

 On Mon, Feb 25, 2013 at 10:40:29AM +0100, Twan van Laarhoven wrote:
  I think there is no need to have a separate REWRITE_WITH_LOCATION
  rule. What if the compiler instead rewrites 'currentLocation' to the
  current location? Then you'd just define the rule:
 
  {-# REWRITE errorLoc error = errorLoc currentLocation #-}

 REWRITE rules are only enabled with -O.  Source locations are also
 useful during development (when you care more about compilation time
 than efficient code and hence use -O0).  So I'm not sure whether it's a
 good idea to lump those two things together.


 I could imagine that source locations being useful when debugging rewrite
 rules for example.

 I think your argument makes sense, but why not fix that specifically?

 {-# REWRITE ALWAYS errorLoc error = errorLoc currentLocation #-}



At that point, we've now made two changes to REWRITE rules:

1. They can takes a new ALWAYS parameters.
2. There's a new, special identifier currentLocation available.

What would be the advantage of that approach versus introducing a single
new REWRITE_WITH_LOCATION pragma?

Michael
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] RFC: rewrite-with-location proposal

2013-02-25 Thread Michael Snoyman
On Mon, Feb 25, 2013 at 4:42 PM, Simon Peyton-Jones
simo...@microsoft.comwrote:

  I’m afraid the rewrite-rule idea won’t work.  RULES are applied during
 optimisation, when tons of inlining has happened and the program has been
 shaken around a lot. No reliable source location information is available
 there.

 **


Do you mean that the proposal itself won't work, or specifically
implementing this feature in terms of existing rewrite rules won't work?


 **

 See http://hackage.haskell.org/trac/ghc/wiki/ExplicitCallStack; and
 please edit it.

 **


One thing I'd disagree with on that page is point (3). While it's certainly
nice to have a full stack trace, implementing just shallow call information
is incredibly useful. For logging and test framework usages, it in fact
completely covers the use case. And even for debugging, I think it would be
a massive step in the right direction.

I'll admit to ignorance on the internals of GHC, but it seems like doing
the shallow source location approach would be far simpler than a full
trace. I'd hate to lose a very valuable feature because we can't implement
the perfect feature.


 **

 One idea I had, which that page does not yet describe, is to have an
 implicit parameter,
 something like ?loc::Location, with

   errLoc :: ?loc::Location => String -> a

   errLoc s = error (“At “ ++ ?loc ++ “\n” ++ s)

 This behaves exactly like an ordinary implicit parameter, EXCEPT that if
 there is no binding for ?loc::Location, then the current location is used.
 Thus

 myErr :: ?loc::Location => Int -> a

 myErr n = errLoc (show n)

 foo :: Int -> Int

 foo n | n < 0     = myErr n
       | otherwise = ...whatever...


 When typechecking ‘foo’ we need ?loc:Location, and so the magic is that we
 use the location of the call of myErr in foo.

 ** **

 Simon

 ** **

 ** **

 ** **

 *From:* haskell-cafe-boun...@haskell.org [mailto:
 haskell-cafe-boun...@haskell.org] *On Behalf Of *Alexander Kjeldaas
 *Sent:* 25 February 2013 12:16
 *To:* Simon Hengel
 *Cc:* Haskell Cafe
 *Subject:* Re: [Haskell-cafe] RFC: rewrite-with-location proposal

 ** **

 On Mon, Feb 25, 2013 at 12:46 PM, Simon Hengel s...@typeful.net wrote:***
 *

  On Mon, Feb 25, 2013 at 10:40:29AM +0100, Twan van Laarhoven wrote:
  I think there is no need to have a separate REWRITE_WITH_LOCATION
  rule. What if the compiler instead rewrites 'currentLocation' to the
  current location? Then you'd just define the rule:
 
  {-# REWRITE errorLoc error = errorLoc currentLocation #-}

 REWRITE rules are only enabled with -O.  Source locations are also
 useful during development (when you care more about compilation time
 than efficient code and hence use -O0).  So I'm not sure whether it's a
 good idea to lump those two things together.

  ** **

 I could imagine that source locations being useful when debugging rewrite
 rules for example.

 ** **

 I think your argument makes sense, but why not fix that specifically?

 ** **

 {-# REWRITE ALWAYS errorLoc error = errorLoc currentLocation #-}

 ** **

 Alexander

 ** **

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] RFC: rewrite-with-location proposal

2013-02-24 Thread Michael Snoyman
Quite a while back, Simon Hengel and I put together a proposal[1] for a new
feature in GHC. The basic idea is pretty simple: provide a new pragma that
could be used like so:

error :: String -> a
errorLoc :: IO Location -> String -> a
{-# REWRITE_WITH_LOCATION error errorLoc #-}

Then all usages of `error` would be converted into calls to `errorLoc` by
the compiler, passing in the location information of where the call
originated from. Our three intended use cases are:

* Locations for failing test cases in a test framework
* Locations for log messages
* assert/error/undefined

Note that the current behavior of the assert function[2] already includes
this kind of approach, but it is a special case hard-coded into the
compiler. This proposal essentially generalizes that behavior and makes it
available for all functions, whether included with GHC or user-defined.
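
(An illustrative sketch of the logging use case: the Location type, its
fields, and logInfoLoc below are hypothetical; only the pragma form comes from
the proposal.)

    data Location = Location
        { locationFile :: FilePath
        , locationLine :: Int
        }

    logInfo :: String -> IO ()
    logInfo = logInfoLoc (return (Location "<unknown>" 0))

    logInfoLoc :: IO Location -> String -> IO ()
    logInfoLoc getLoc msg = do
        Location file line <- getLoc
        putStrLn (file ++ ":" ++ show line ++ ": " ++ msg)

    {-# REWRITE_WITH_LOCATION logInfo logInfoLoc #-}

    -- A call such as `logInfo "starting"` at Foo.hs, line 12, would then be
    -- rewritten by the compiler into a call to logInfoLoc carrying that
    -- location.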

The proposal spells out some details of this approach, and contrasts with
other methods being used today for the same purpose, such as TH and CPP.

Michael

[1] https://github.com/sol/rewrite-with-location
[2]
http://hackage.haskell.org/packages/archive/base/4.6.0.1/doc/html/Control-Exception.html#v:assert
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Fixing undeduceable instance == overlapping instance

2013-02-23 Thread Michael Orlitzky
I'm trying to write a determinant function that works on matrices
parameterized by their dimensions (Peano naturals). If I declare the
following,

  -- Define a class so that we get a different determinant function
  -- on the base/recursive cases.
  class (Eq a, Ring.C a) => Determined m a where
    determinant :: (m a) -> a

  -- Base case, 1x1 matrices
  instance (Eq a, Ring.C a) => Determined (Mat (S Z) (S Z)) a where
    determinant m = m !!! (0,0)

  -- Recursive case, (n+2) x (n+2) matrices.
  instance (Eq a, Ring.C a, Arity n)
      => Determined (Mat (S (S n)) (S (S n))) a where
    determinant m =
      ...
      -- Recursive algorithm, the i,jth minor has dimension
      -- (n+1) x (n+1).
      foo bar (determinant (minor m i j))

I get an error stating that I'm missing an instance:

  Could not deduce (Determined (Mat (S n) (S n)) a)
  ...

Clearly, I *have* an instance for that case: if n == Z, then it's the
base case. If not, it's the recursive case. But GHC can't figure that
out. So maybe if I define a dummy instance to make it happy, it won't
notice that they overlap?

  instance (Eq a, Ring.C a) => Determined (Mat m m) a where
    determinant _ = undefined

No such luck:

   let m = fromList [[1,2],[3,4]] :: Mat2 Int
   determinant m

Overlapping instances for Determined (Mat N2 N2) Int
  arising from a use of `determinant'
Matching instances:
  instance (Eq a, Ring.C a) => Determined (Mat m m) a
-- Defined at Linear/Matrix2.hs:353:10
  instance (Eq a, Ring.C a, Arity n) =>
   Determined (Mat (S (S n)) (S (S n))) a
...

I even tried generalizing the (Mat m m) instance definition so that
OverlappingInstances would pick the one I want, but I can't get that to
work either.

Is there some way to massage this?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fixing undeduceable instance == overlapping instance

2013-02-23 Thread Michael Orlitzky
On 02/24/2013 02:14 AM, Karl Voelker wrote:
 On Sat, Feb 23, 2013 at 10:28 PM, Michael Orlitzky mich...@orlitzky.com
 mailto:mich...@orlitzky.com wrote:
 
   -- Recursive case, (n+2) x (n+2) matrices.
   instance (Eq a, Ring.C a, Arity n)
  = Determined (Mat (S (S n)) (S (S n))) a where
   determinant m =
 ...
 -- Recursive algorithm, the i,jth minor has dimension
 -- (n+1) x (n+1).
 foo bar (determinant (minor m i j))
 
 I get an error stating that I'm missing an instance:
 
   Could not deduce (Determined (Mat (S n) (S n)) a)
   ...
 
 
 It looks to me like you just need to add (Determined (Mat (S n) (S n))
 a) into the context of this instance. The problem is that the type
 variable n could be almost anything (at least as far as this instance
 definition knows).
 

So simple, thank you!
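
(An aside for readers, not from the thread: a self-contained toy showing the
same principle with a simpler class. The recursive instance must carry the
one-size-smaller instance in its context, because GHC selects instances by the
syntactic head and cannot do case analysis on n.)

    data Z = Z
    data S n = S n

    class Count n where
      count :: n -> Int

    instance Count Z where
      count _ = 0

    -- Without the "Count n" context this fails with the same kind of
    -- "could not deduce" error as the determinant example.
    instance Count n => Count (S n) where
      count (S n) = 1 + count n

    main :: IO ()
    main = print (count (S (S (S Z))))  -- prints 3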


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANNOUNCE: hackage-proxy 0.1.0.0

2013-02-17 Thread Michael Snoyman
I'd like to announce the first release of a new tool called hackage-proxy.
The purpose is to provide a local proxy for a Hackage server, which somehow
modifies files in transport. The motivating case for this was getting more
meaningful error output from Stackage when compiling against GHC HEAD. When
compiling against actual Hackage, cabal will simply refuse to try to build
packages which have upper version bounds such as base < 4.7. This
introduces a big dilemma:

* Package authors do not want to bump the version bounds on their packages
until they've tested against that version.

* It's very difficult to do meaningful tests of GHC until packages on
Hackage have been updated.

Hopefully this package can help resolve the dilemma. Instead of requiring
authors to upload new versions of their packages in order to test them,
this proxy will modify the cabal files it downloads and strip off the
version bounds of specified packages. Then, you can test with a newer
version of GHC and find actual compilation errors instead of version bound
constraints.

## Example Usage

1. cabal install hackage-proxy

2. Run hackage-proxy. By default, it will use the official Hackage server
as the source, drop bounds on the packages base, process, directory,
template-haskell, and Cabal, and serve from port 4200. All of this can be
modified via command-line options.

3. Edit your ~/.cabal/config file. Comment out the
hackage.haskell.org lines, and add in something like the following:

remote-repo: hackage-proxy:http://localhost:4200

4. cabal update

5. cabal install your-package-list-here

I think this can be a very valuable tool for anyone wanting to test out
newer versions of GHC. In addition, as part of my normal Stackage work, I'm
now collecting fairly detailed error logs of a number of packages. If this
would be useful for the GHC team or anyone else, let me know and I can try
and provide the logs somehow.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: rss2irc-1.0, announces RSS/Atom feed updates to IRC

2013-02-15 Thread Simon Michael
I'm pleased to announce a new release of rss2irc, the software behind 
hackagebot on #haskell. rss2irc is an IRC bot that polls an RSS or Atom feed 
and announces updates to an IRC channel, with options for customizing output 
and behavior.  It aims to be an easy-to-use, reliable, well-behaved bot.

Release notes:

1.0 (2013/2/15)

New:

  * more robust item detection and duplicate announcement protection, with 
simpler options
  * easier irc address syntax, drop -p/--port option
  * can poll urls with semicolon parameter separator (eg darcsweb's)
  * can poll https feeds
  * can poll from stdin (-)
  * can poll a file containing multiple copies of a feed (eg for testing)
  * can announce item urls containing percent
  * `--cache-control` option sets a HTTP Cache-Control header
  * `--use-actions` announces with CTCP ACTIONs (like the /me command)

Fixed:

  * updated for GHC 7.6  current libs
  * initialises http properly on microsoft windows
  * builds threaded and optimised by default
  * thread and error handling is more robust, eg don't ignore exceptions in the 
irc writer thread
  * no longer adds stray upload: to IRC messages
  * renamed --dupe-descriptions to `--allow-duplicates`
  * dropped --debug flag
  * new item detection and announcing is more robust
  * announcements on console are clearer
  * a simulated irc connection is not logged unless --debug-irc is used

Best,
-Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] xml conduit

2013-02-11 Thread Michael Snoyman
OK, after some experimentation, I've decided that this would be something
really cool, but I don't have the experience with lens to do it myself.
Here's what I came up with so far:

https://gist.github.com/snoyberg/4755679

(Also available on School of Haskell[1], but our version of lens is too old
for this snippet.)

So if someone wants to pursue this, I'd be really interested to see the
results.

Michael

[1]
https://haskell.fpcomplete.com/user/snoyberg/random-code-snippets/xml-conduit-lens
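
(A minimal, hedged sketch of the flavour of thing being discussed, not the
gist's contents: a Traversal over the immediate child elements of a Text.XML
Element.)

    import Control.Applicative (Applicative, pure, (<$>))
    import Data.Traversable (traverse)
    import Text.XML (Element (..), Node (..))

    -- Visit each direct child element, leaving other node types untouched.
    childElems :: Applicative f => (Element -> f Element) -> Element -> f Element
    childElems f (Element name attrs nodes) =
        Element name attrs <$> traverse g nodes
      where
        g (NodeElement e) = NodeElement <$> f e
        g n               = pure n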


On Mon, Feb 11, 2013 at 8:09 AM, Michael Sloan mgsl...@gmail.com wrote:

 I realized that the term payload wouldn't make much sense in the context
 of XML.  What I meant was elementName with elementAttributes (but not
 elementNodes - that's the point).  So, such navigations could yield a
 datatype containing those.

 -Michael


 On Sun, Feb 10, 2013 at 9:41 PM, Michael Sloan mgsl...@gmail.com wrote:

 Err:  That first link into Zipper.hs should instead be:


 https://github.com/ekmett/lens/blob/f8dfe3fd444648f61b8594cd672c25e70c8a30ff/src/Control/Lens/Internal/Zipper.hs#L66


 On Sun, Feb 10, 2013 at 9:40 PM, Michael Sloan mgsl...@gmail.com wrote:

 I'm no lens authority by any means, but indeed, it looks like something
 like Cursor / Axis could be done with the lens zipper.


 https://github.com/snoyberg/xml/blob/0367af336e86d723bd9c9fbb49db0f86d1f989e6/xml-enumerator/Text/XML/Cursor/Generic.hs#L38

  This cursor datatype is very much like the (:>) zipper type (I'm linking
 to old code, because that's when I understood it - the newer stuff is
 semantically the same, but more efficient, more confusing, and less
 directly relatable):


 https://github.com/ekmett/lens/blob/f8dfe3fd444648f61b8594cd672c25e70c8a30ff/src/Control/Lens/Internal/Zipper.hs#L317

 Which is built out of the following two datatypes:

 1) parent (and the way to rebuild the tree on the way back up) is
 provided by this datatype:


 https://github.com/ekmett/lens/blob/f8dfe3fd444648f61b8594cd672c25e70c8a30ff/src/Control/Lens/Internal/Zipper.hs#L74

 2) precedingSibling / followingSibling / node is provided by this
 datatype (which is pretty much the familiar list zipper!):


 https://github.com/ekmett/lens/blob/f8dfe3fd444648f61b8594cd672c25e70c8a30ff/src/Control/Lens/Internal/Zipper.hs#L317


 One way that this would be powerful is that some of the Axis
 constructors could return a zipper.  In particular, all of the axis
 yielding functions except the following would be supported:

 parent, precedingSibling, followingSibling, ancestor, descendent,
 orSelf, check

 This is because zippers can be used for modification, which doesn't work
 out very well when you can navigate to something outside of your focii's
 children.  If we have a new datatype, that represents a node's payload,
 then we could conceivably represent all of the axis yielding operations
 except for parent / ancestor.  However, those operations would be
 navigations to payloads - further xml-hierarchy level navigation would be
 impossible because you'd no longer have references to children.  (further
 navigation into payloads on the other hand, would still be possible)

 So, that's just my thoughts after looking at it a bit - I hope it's
 comprehensible / helpful!  An XML zipper would be pretty awesome.

 -Michael


 On Sun, Feb 10, 2013 at 8:34 PM, Michael Snoyman mich...@snoyman.comwrote:




 On Sun, Feb 10, 2013 at 8:51 PM, grant the...@hotmail.com wrote:

 Michael Snoyman michael at snoyman.com writes:

 

 Hi Michael,

 Just one last thought. Does it make any sense that xml-conduit could be
 rewritten as a lens instead of a cursor? Or leverage the lens package
 somehow?


 That's a really interesting idea, I'd never thought about it before.
 It's definitely something worth playing around with. However, I think in
 this case the Cursor is providing a totally different piece of
 functionality than what lenses would do. The Cursor is really working as a
 Zipper, allowing you to walk the node tree and do queries about preceding
 and following siblings and ancestors.

 Now given that every time I'm on #haskell someone mentions zippers in
 the context of lens, maybe lens *would* solve this use case as well, but
 I'm still a lens novice (if that), so I can't really speak on the matter.
 Maybe someone with more lens experience could provide some insight.

 Either way, some kind of lens add-on sounds really useful.

 Michael

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] xml conduit

2013-02-10 Thread Michael Snoyman
On Sun, Feb 10, 2013 at 8:51 PM, grant the...@hotmail.com wrote:

 Michael Snoyman michael at snoyman.com writes:

 

 Hi Michael,

 Just one last thought. Does it make any sense that xml-conduit could be
 rewritten as a lens instead of a cursor? Or leverage the lens package
 somehow?


That's a really interesting idea, I'd never thought about it before. It's
definitely something worth playing around with. However, I think in this
case the Cursor is providing a totally different piece of functionality
than what lenses would do. The Cursor is really working as a Zipper,
allowing you to walk the node tree and do queries about preceding and
following siblings and ancestors.

Now given that every time I'm on #haskell someone mentions zippers in the
context of lens, maybe lens *would* solve this use case as well, but I'm
still a lens novice (if that), so I can't really speak on the matter. Maybe
someone with more lens experience could provide some insight.

Either way, some kind of lens add-on sounds really useful.

Michael
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] xml conduit

2013-02-10 Thread Michael Sloan
I'm no lens authority by any means, but indeed, it looks like something
like Cursor / Axis could be done with the lens zipper.

https://github.com/snoyberg/xml/blob/0367af336e86d723bd9c9fbb49db0f86d1f989e6/xml-enumerator/Text/XML/Cursor/Generic.hs#L38

This cursor datatype is very much like the (:>) zipper type (I'm linking to
old code, because that's when I understood it - the newer stuff is
semantically the same, but more efficient, more confusing, and less
directly relatable):

https://github.com/ekmett/lens/blob/f8dfe3fd444648f61b8594cd672c25e70c8a30ff/src/Control/Lens/Internal/Zipper.hs#L317

Which is built out of the following two datatypes:

1) parent (and the way to rebuild the tree on the way back up) is provided
by this datatype:

https://github.com/ekmett/lens/blob/f8dfe3fd444648f61b8594cd672c25e70c8a30ff/src/Control/Lens/Internal/Zipper.hs#L74

2) precedingSibling / followingSibling / node is provided by this datatype
(which is pretty much the familiar list zipper!):

https://github.com/ekmett/lens/blob/f8dfe3fd444648f61b8594cd672c25e70c8a30ff/src/Control/Lens/Internal/Zipper.hs#L317
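
For anyone who hasn't seen one, here is a minimal sketch of that familiar
list zipper, just to fix ideas (this is not the lens representation itself):

data ListZipper a = ListZipper
    { before :: [a]  -- siblings to the left, nearest first
    , focus  :: a
    , after  :: [a]  -- siblings to the right
    }

left, right :: ListZipper a -> Maybe (ListZipper a)
left  (ListZipper (l:ls) x rs) = Just (ListZipper ls l (x:rs))
left  _                        = Nothing
right (ListZipper ls x (r:rs)) = Just (ListZipper (x:ls) r rs)
right _                        = Nothing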


One way that this would be powerful is that some of the Axis constructors
could return a zipper.  In particular, all of the axis yielding functions
except the following would be supported:

parent, precedingSibling, followingSibling, ancestor, descendent, orSelf,
check

This is because zippers can be used for modification, which doesn't work
out very well when you can navigate to something outside of your focii's
children.  If we have a new datatype, that represents a node's payload,
then we could conceivably represent all of the axis yielding operations
except for parent / ancestor.  However, those operations would be
navigations to payloads - further xml-hierarchy level navigation would be
impossible because you'd no longer have references to children.  (further
navigation into payloads on the other hand, would still be possible)

So, that's just my thoughts after looking at it a bit - I hope it's
comprehensible / helpful!  An XML zipper would be pretty awesome.

-Michael


On Sun, Feb 10, 2013 at 8:34 PM, Michael Snoyman mich...@snoyman.comwrote:




 On Sun, Feb 10, 2013 at 8:51 PM, grant the...@hotmail.com wrote:

 Michael Snoyman michael at snoyman.com writes:

 

 Hi Michael,

 Just one last thought. Does it make any sense that xml-conduit could be
 rewritten as a lens instead of a cursor? Or leverage the lens package
 somehow?


 That's a really interesting idea, I'd never thought about it before. It's
 definitely something worth playing around with. However, I think in this
 case the Cursor is providing a totally different piece of functionality
 than what lenses would do. The Cursor is really working as a Zipper,
 allowing you to walk the node tree and do queries about preceding and
 following siblings and ancestors.

 Now given that every time I'm on #haskell someone mentions zippers in the
 context of lens, maybe lens *would* solve this use case as well, but I'm
 still a lens novice (if that), so I can't really speak on the matter. Maybe
 someone with more lens experience could provide some insight.

 Either way, some kind of lens add-on sounds really useful.

 Michael

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] xml conduit

2013-02-10 Thread Michael Sloan
Err:  That first link into Zipper.hs should instead be:

https://github.com/ekmett/lens/blob/f8dfe3fd444648f61b8594cd672c25e70c8a30ff/src/Control/Lens/Internal/Zipper.hs#L66


On Sun, Feb 10, 2013 at 9:40 PM, Michael Sloan mgsl...@gmail.com wrote:

 I'm no lens authority by any means, but indeed, it looks like something
 like Cursor / Axis could be done with the lens zipper.


 https://github.com/snoyberg/xml/blob/0367af336e86d723bd9c9fbb49db0f86d1f989e6/xml-enumerator/Text/XML/Cursor/Generic.hs#L38

  This cursor datatype is very much like the (:>) zipper type (I'm linking
 to old code, because that's when I understood it - the newer stuff is
 semantically the same, but more efficient, more confusing, and less
 directly relatable):


 https://github.com/ekmett/lens/blob/f8dfe3fd444648f61b8594cd672c25e70c8a30ff/src/Control/Lens/Internal/Zipper.hs#L317

 Which is built out of the following two datatypes:

 1) parent (and the way to rebuild the tree on the way back up) is provided
 by this datatype:


 https://github.com/ekmett/lens/blob/f8dfe3fd444648f61b8594cd672c25e70c8a30ff/src/Control/Lens/Internal/Zipper.hs#L74

 2) precedingSibling / followingSibling / node is provided by this datatype
 (which is pretty much the familiar list zipper!):


 https://github.com/ekmett/lens/blob/f8dfe3fd444648f61b8594cd672c25e70c8a30ff/src/Control/Lens/Internal/Zipper.hs#L317


 One way that this would be powerful is that some of the Axis constructors
 could return a zipper.  In particular, all of the axis yielding functions
 except the following would be supported:

 parent, precedingSibling, followingSibling, ancestor, descendent, orSelf,
 check

 This is because zippers can be used for modification, which doesn't work
 out very well when you can navigate to something outside of your focii's
 children.  If we have a new datatype, that represents a node's payload,
 then we could conceivably represent all of the axis yielding operations
 except for parent / ancestor.  However, those operations would be
 navigations to payloads - further xml-hierarchy level navigation would be
 impossible because you'd no longer have references to children.  (further
 navigation into payloads on the other hand, would still be possible)

 So, that's just my thoughts after looking at it a bit - I hope it's
 comprehensible / helpful!  An XML zipper would be pretty awesome.

 -Michael


 On Sun, Feb 10, 2013 at 8:34 PM, Michael Snoyman mich...@snoyman.comwrote:




 On Sun, Feb 10, 2013 at 8:51 PM, grant the...@hotmail.com wrote:

 Michael Snoyman michael at snoyman.com writes:

 

 Hi Michael,

 Just one last thought. Does it make any sense that xml-conduit could be
 rewritten as a lens instead of a cursor? Or leverage the lens package
 somehow?


 That's a really interesting idea, I'd never thought about it before. It's
 definitely something worth playing around with. However, I think in this
 case the Cursor is providing a totally different piece of functionality
 than what lenses would do. The Cursor is really working as a Zipper,
 allowing you to walk the node tree and do queries about preceding and
 following siblings and ancestors.

 Now given that every time I'm on #haskell someone mentions zippers in the
 context of lens, maybe lens *would* solve this use case as well, but I'm
 still a lens novice (if that), so I can't really speak on the matter. Maybe
 someone with more lens experience could provide some insight.

 Either way, some kind of lens add-on sounds really useful.

 Michael

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] xml conduit

2013-02-10 Thread Michael Sloan
I realized that the term payload wouldn't make much sense in the context
of XML.  What I meant was elementName with elementAttributes (but not
elementNodes - that's the point).  So, such navigations could yield a
datatype containing those.

-Michael


On Sun, Feb 10, 2013 at 9:41 PM, Michael Sloan mgsl...@gmail.com wrote:

 Err:  That first link into Zipper.hs should instead be:


 https://github.com/ekmett/lens/blob/f8dfe3fd444648f61b8594cd672c25e70c8a30ff/src/Control/Lens/Internal/Zipper.hs#L66


 On Sun, Feb 10, 2013 at 9:40 PM, Michael Sloan mgsl...@gmail.com wrote:

 I'm no lens authority by any means, but indeed, it looks like something
 like Cursor / Axis could be done with the lens zipper.


 https://github.com/snoyberg/xml/blob/0367af336e86d723bd9c9fbb49db0f86d1f989e6/xml-enumerator/Text/XML/Cursor/Generic.hs#L38

  This cursor datatype is very much like the (:>) zipper type (I'm linking
 to old code, because that's when I understood it - the newer stuff is
 semantically the same, but more efficient, more confusing, and less
 directly relatable):


 https://github.com/ekmett/lens/blob/f8dfe3fd444648f61b8594cd672c25e70c8a30ff/src/Control/Lens/Internal/Zipper.hs#L317

 Which is built out of the following two datatypes:

 1) parent (and the way to rebuild the tree on the way back up) is
 provided by this datatype:


 https://github.com/ekmett/lens/blob/f8dfe3fd444648f61b8594cd672c25e70c8a30ff/src/Control/Lens/Internal/Zipper.hs#L74

 2) precedingSibling / followingSibling / node is provided by this
 datatype (which is pretty much the familiar list zipper!):


 https://github.com/ekmett/lens/blob/f8dfe3fd444648f61b8594cd672c25e70c8a30ff/src/Control/Lens/Internal/Zipper.hs#L317


 One way that this would be powerful is that some of the Axis constructors
 could return a zipper.  In particular, all of the axis yielding functions
 except the following would be supported:

 parent, precedingSibling, followingSibling, ancestor, descendent, orSelf,
 check

 This is because zippers can be used for modification, which doesn't work
 out very well when you can navigate to something outside of your focii's
 children.  If we have a new datatype, that represents a node's payload,
 then we could conceivably represent all of the axis yielding operations
 except for parent / ancestor.  However, those operations would be
 navigations to payloads - further xml-hierarchy level navigation would be
 impossible because you'd no longer have references to children.  (further
 navigation into payloads on the other hand, would still be possible)

 So, that's just my thoughts after looking at it a bit - I hope it's
 comprehensible / helpful!  An XML zipper would be pretty awesome.

 -Michael


 On Sun, Feb 10, 2013 at 8:34 PM, Michael Snoyman mich...@snoyman.comwrote:




 On Sun, Feb 10, 2013 at 8:51 PM, grant the...@hotmail.com wrote:

 Michael Snoyman michael at snoyman.com writes:

 

 Hi Michael,

 Just one last thought. Does it make any sense that xml-conduit could be
 rewritten as a lens instead of a cursor? Or leverage the lens package
 somehow?


 That's a really interesting idea, I'd never thought about it before.
 It's definitely something worth playing around with. However, I think in
 this case the Cursor is providing a totally different piece of
 functionality than what lenses would do. The Cursor is really working as a
 Zipper, allowing you to walk the node tree and do queries about preceding
 and following siblings and ancestors.

 Now given that every time I'm on #haskell someone mentions zippers in
 the context of lens, maybe lens *would* solve this use case as well, but
 I'm still a lens novice (if that), so I can't really speak on the matter.
 Maybe someone with more lens experience could provide some insight.

 Either way, some kind of lens add-on sounds really useful.

 Michael

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] xml conduit

2013-02-09 Thread Michael Snoyman
Hi Grant,

As you might expect from immutable data structures, there's no way to
update in place. The approach you'd take to XSLT: traverse the tree, check
each node, and output a new structure. I put together the following as an
example, but I could certainly imagine adding more combinators to the
Cursor module to make something like this more convenient.

{-# LANGUAGE OverloadedStrings #-}
import Prelude hiding (readFile, writeFile)
import Text.XML
import Text.XML.Cursor

main = do
    doc@(Document pro (Element name attrs _) epi) <- readFile def "test.xml"
    let nodes = fromDocument doc $/ update
    writeFile def "output.xml" $ Document pro (Element name attrs nodes) epi
  where
    update c =
        case node c of
            NodeElement (Element f attrs _)
                | parentIsE c && gparentIsD c ->
                    [ NodeElement $ Element f attrs
                        [ NodeContent "New content"
                        ]
                    ]
            NodeElement (Element name attrs _) ->
                [NodeElement $ Element name attrs $ c $/ update]
            n -> [n]
    parentIsE c = not $ null $ parent c >>= element "e"
    gparentIsD c = not $ null $ parent c >>= parent >>= element "d"

Michael


On Sat, Feb 9, 2013 at 1:31 AM, grant the...@hotmail.com wrote:

 Hi,

 Is there a nice way to update xml. I want to be able to use xml-conduit
 to find a location in the xml and then add/update that node.

 eg xpath from //d/e/f and then change the content at 'f' or add a new node

 <a>
 ...
   <d>
     <e>
       <f>some data to change</f>
     </e>
   </d>
 ...
 </a>


 Thanks for any help,
 Grant


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Yet another Conduit question

2013-02-04 Thread Michael Snoyman
I think this is probably the right approach. However, there's something
important to point out: flushing based on timing issues must be handled
*outside* of the conduit functionality, since by design conduit will not
allow you to (for example) run `await` for up to a certain amount of time.
You'll probably need to do this outside of your conduit chain, in the
initial Source. It might look something like this:

yourSource = do
    mx <- timeout somePeriod myAction
    yield $ maybe Flush Chunk mx
    yourSource
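
A slightly fuller, self-contained version of that idea might look like the
following (a sketch only; timedSource, the period and the action are
placeholder names, not anything provided by conduit itself):

import Control.Monad.IO.Class (liftIO)
import Data.Conduit
import System.Timeout (timeout)

timedSource :: Int          -- ^ timeout per item, in microseconds
            -> IO a         -- ^ the action producing each item
            -> Source IO (Flush a)
timedSource period action = loop
  where
    loop = do
        mx <- liftIO $ timeout period action
        yield $ maybe Flush Chunk mx
        loop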


On Sun, Feb 3, 2013 at 5:06 PM, Felipe Almeida Lessa felipe.le...@gmail.com
 wrote:

 I guess you could use the Flush datatype [1] depending on how your
 data is generated.

 Cheers,

 [1]
 http://hackage.haskell.org/packages/archive/conduit/0.5.4.1/doc/html/Data-Conduit.html#t:Flush

 On Fri, Feb 1, 2013 at 6:28 AM, Simon Marechal si...@banquise.net wrote:
  On 01/02/2013 08:21, Michael Snoyman wrote:
  So you're saying you want to keep the same grouping that you had
  originally? Or do you want to batch up a certain number of results?
  There are lots of ways of approaching this problem, and the types don't
  imply nearly enough to determine what you're hoping to achieve here.
 
  Sorry for not being clear. I would like to group them as much as
  possible, that is up to a certain limit, and also within a time
  threshold. I believe that the conduit code will be called only when
  something happens in the conduit, so an actual timer would be useless
  (unless I handle this at the source perhaps, and propagate ticks).
 
  That is why in my first message I talked about stacking things into the
  list until the conduit has no more input available, or a maximum size is
  reached, but was not sure this even made sense.
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe



 --
 Felipe.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Yet another Conduit question

2013-02-04 Thread Michael Snoyman
On Mon, Feb 4, 2013 at 3:47 PM, Simon Marechal si...@banquise.net wrote:

 On 03/02/2013 16:06, Felipe Almeida Lessa wrote:
  I guess you could use the Flush datatype [1] depending on how your
  data is generated.

 Thank you for this suggestion. I tried to do exactly this by modifying
 my bulk Redis source so that it can timeout and send empty lists [1].
 Then I wrote a few helpers conduits[2], such as :

 concatFlush :: (Monad m) => Integer -> Conduit [a] m (Flush a)

 which will convert a stream of [a] into a stream of (Flush a), sending
 Flush whenever it encounters an empty list or it has sent a tunable amount
 of data downstream.

 I finally modified my examples [3]. I realized then it would be nice to
 have fmap for conduits (but I am not sure how to write such a type
 signature). Suggestions are welcome !
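
For readers following along, a conduit with that behaviour might look
something like this (a guess based on the description above, not the actual
hslogstash implementation):

import Data.Conduit

concatFlush :: Monad m => Integer -> Conduit [a] m (Flush a)
concatFlush threshold = go 0
  where
    go n = do
        mxs <- await
        case mxs of
            Nothing -> return ()
            Just [] -> yield Flush >> go 0
            Just xs -> do
                mapM_ (yield . Chunk) xs
                let n' = n + fromIntegral (length xs)
                if n' >= threshold
                    then yield Flush >> go 0
                    else go n'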


Actually `fmap` already exists on the Pipe datatype, it just probably
doesn't do what you want. It modifies the return value, which is only
relevant for Sinks.

What you probably are looking for is mapOutput[1].
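
For what it's worth, a tiny illustration of mapOutput (assuming the conduit
0.5 export; the link below has the authoritative signature):

import Data.Conduit (Source, mapOutput)
import qualified Data.Conduit.List as CL

-- turn a Source of Ints into a Source of Strings
shownInts :: Monad m => Source m String
shownInts = mapOutput show (CL.sourceList [1 :: Int, 2, 3])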

Michael

[1] https://haskell.fpcomplete.com/hoogle?q=mapOutput



 [1]

 https://github.com/bartavelle/hslogstash/commit/663bf8f5e6058b476c9ed9b5c9cf087221b79b36
 [2]
 https://github.com/bartavelle/hslogstash/blob/master/Data/Conduit/Misc.hs
 [3]

 https://github.com/bartavelle/hslogstash/blob/master/examples/RedisToElasticsearch.hs

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Yet another Conduit question

2013-02-04 Thread Michael Snoyman
Hmm, that's an interesting trick. I can't say that I ever thought bracketP
would be used in that way. The only change I might recommend is using
addCleanup[1] instead, which doesn't introduce the MonadResource constraint.

Michael

[1]
http://haddocks.fpcomplete.com/fp/7.4.2/2012-12-11/conduit/Data-Conduit-Internal.html#v:addCleanup


On Mon, Feb 4, 2013 at 4:37 PM, Kevin Quick qu...@sparq.org wrote:

 While on the subject of conduits and timing, I'm using the following
 conduit to add elapsed timing information:

 timedConduit :: MonadResource m => forall l o u . Pipe l o o u m (u, NominalDiffTime)
 timedConduit = bracketP getCurrentTime (\_ -> return ()) inner
   where inner st = do r <- awaitE
                       case r of
                         Right x -> yield x >> inner st
                         Left  r -> deltaTime st >>= \t -> return (r,t)
         deltaTime st = liftIO $ flip diffUTCTime st <$> getCurrentTime

 I'm aware that this is primarily timing the downstream (and ultimately the
 Sink) more than the upstream, and I'm using the bracketP to attempt to
 delay the acquisition of the initial time (st) until the first downstream
 request for data.

 I would appreciate any other insights regarding concerns, issues, or
 oddities that I might encounter with the above.

 Thanks,
   Kevin


 On Mon, 04 Feb 2013 02:25:11 -0700, Michael Snoyman mich...@snoyman.com
 wrote:

  I think this is probably the right approach. However, there's something
 important to point out: flushing based on timing issues must be handled
 *outside* of the conduit functionality, since by design conduit will not
 allow you to (for example) run `await` for up to a certain amount of time.
 You'll probably need to do this outside of your conduit chain, in the
 initial Source. It might look something like this:

 yourSource = do
     mx <- timeout somePeriod myAction
     yield $ maybe Flush Chunk mx
     yourSource


 On Sun, Feb 3, 2013 at 5:06 PM, Felipe Almeida Lessa 
 felipe.le...@gmail.com

 wrote:


  I guess you could use the Flush datatype [1] depending on how your
 data is generated.

 Cheers,

 [1]
  http://hackage.haskell.org/packages/archive/conduit/0.5.4.1/doc/html/Data-Conduit.html#t:Flush

 On Fri, Feb 1, 2013 at 6:28 AM, Simon Marechal si...@banquise.net
 wrote:
  On 01/02/2013 08:21, Michael Snoyman wrote:
  So you're saying you want to keep the same grouping that you had
  originally? Or do you want to batch up a certain number of results?
  There are lots of ways of approaching this problem, and the types
 don't
  imply nearly enough to determine what you're hoping to achieve here.
 
  Sorry for not being clear. I would like to group them as much as
  possible, that is up to a certain limit, and also within a time
  threshold. I believe that the conduit code will be called only when
  something happens in the conduit, so an actual timer would be useless
  (unless I handle this at the source perhaps, and propagate ticks).
 
  That is why in my first message I talked about stacking things into the
  list until the conduit has no more input available, or a maximum size
 is
  reached, but was not sure this even made sense.
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe



 --
 Felipe.



 --
 -KQ


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] branching conduits

2013-01-31 Thread Michael Snoyman
On Thu, Jan 31, 2013 at 11:48 AM, Simon Marechal si...@banquise.net wrote:

 Hello,

 I have found the Conduit abstraction to be very well suited to a
 set of
 problems I am facing. I am however wondering how to implement
 branching conduits, and even conduit pools.

 I am currently in the process of rewriting parts (the simple
 parts) of
 the Logstash tool. There is a sample program that I use here:


 https://github.com/bartavelle/hslogstash/blob/deprecateUtils/examples/RedisToElasticsearch.hs

 As it can be seen, it uses a Redis source, a conduit that
 decodes the
 JSON ByteString into a LogstashMessage, a conduit that stores it into
 Elasticsearch and outputs the result of that action as an Either, and
 finally a sink that prints the errors.

 My problem is that I would like more complex behaviour. For
 example, I
 would like to route messages to another server instead of putting them
 into Elasticsearch when the LogstashMessage has some tag set. But this
 is just an example, and it is probable I will want much more complex
 behavior soon.

 I am not sure how to proceed from here, but have the following
 ideas:

  * investigate how the Conduits are made internally to see if I can
  create an operator similar to $$, but that would have a signature like:
  Source m (Either a b) -> Sink a m r -> Sink b m r
 and would do the branching in a binary fashion. I am not sure this is
 even possible.

  * create a mvars connectors constructor, which might have a signature
 like this:

  Int -- ^ branch count
   (LogstashMessage -> Int) -- ^ branching function
  (Sink LogstashMessage m (), [Source m LogstashMessage])
  -- ^ a suitable sink, several sources for the other conduits

  it would internally create a MVar (Maybe LogstashMessage) for each
 branch, and put putMVar accordingly to the branching function. When the
 Conduit is destroyed, it will putMVar Nothing in all MVars.
  the sources would takeMVar, check if it is Nothing, or just proceed as
 expected.

  The MVar should guarantee the constant space property, but there is the
 risk of inter branch blocking when one of the branches is significantly
 slower than the others. It doesn't really matter to me anyway. And all
 the branch Sinks would have to have some synchronization mechanism so
 that the main thread waits for them (as they are going to be launched by
 a forkIO).



   This is the simplest scheme I have thought of, and it is probably not
 a very good one. I am very interested in suggestions here.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



Hi Simon,

For your first approach, I think what you're looking to do is combine two
Sinks together, something like:

combine :: Monad m
        => Sink i1 m r1
        -> Sink i2 m r2
        -> Sink (Either i1 i2) m (r1, r2)

Then you'd be able to use the standard $$ and =$ operators on it. I've put
up an example implementation here[1]. The majority of the code is simple
pattern matching on the different possible combination, but some things to
point out:

* To simplify, we start off with a call to injectLeftovers. This means that
we can entirely ignore the Leftover constructor in the main function.
* Since a Sink will never yield values, we can also ignore the HaveOutput
constructor.
* As soon as either of the Sinks terminates, we terminate the other one as
well and return the results.
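
For illustration, here is roughly how the combined Sink could be driven,
using toy pieces from Data.Conduit.List in place of the Redis and
Elasticsearch parts (and assuming the `combine` from the gist is in scope):

import Data.Conduit
import qualified Data.Conduit.List as CL

example :: Monad m => m ([Int], [String])
example = CL.sourceList [Left 1, Right "a", Left 2, Right "b"]
       $$ combine CL.consume CL.consume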

You can also consider going the mutable container route if you like.
Instead of creating a lot of stuff from scratch with MVars, you could use
stm-conduit[2]. In fact, that package already contains some kind of merging
behavior for sources, it might make sense to ask the author about including
unmerging behavior for Sinks.

Michael

[1] https://gist.github.com/4682609
[2] http://hackage.haskell.org/package/stm-conduit
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Yet another Conduit question

2013-01-31 Thread Michael Snoyman
Firstly, what's the use case that you want to deal with lists? If it's for
efficiency, you'd probably be better off using a Vector instead.

But I think the inverse of `concat` is `singleton = Data.Conduit.List.map
return`, or `awaitForever $ yield . return`, using the list instance for
Monad. Your conduitMap could be implemented then as:

conduitMap conduit = concat =$= conduit =$= singleton
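
Spelled out as a small self-contained module (conduit 0.5 style, with the
thread's `concat` renamed to dodge the Prelude clash), that suggestion might
look like:

import Data.Conduit
import qualified Data.Conduit.List as CL

concat' :: Monad m => Conduit [a] m a
concat' = awaitForever (mapM_ yield)

singleton :: Monad m => Conduit a m [a]
singleton = CL.map return

conduitMap :: Monad m => Conduit i m o -> Conduit [i] m [o]
conduitMap conduit = concat' =$= conduit =$= singleton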

Michael


On Thu, Jan 31, 2013 at 5:12 PM, Simon Marechal si...@banquise.net wrote:

 I am working with bulk sources and sinks, that is with a type like:

 Source m [a]
 Sink [a] m ()

 The problem is that I would like to work on individual values in my
 conduit. I can have this:

 concat :: (Monad m) => Conduit [a] m a
 concat = awaitForever (mapM_ yield)

 But how can I do it the other way around ? I suppose it involves pattern
 matching on the different states my conduit might me in. But is that
 even possible to do it in a non blocking way, that is catenate data
 while there is something to read (up to a certain threshold), and send
 it as soon as there is nothing left to read ? Or doesn't that make any
 sense in the context of Conduits (in the sense that this conduit will be
 recheck for input before the upstream conduits will have a chance to
 operate) ?

 Another approach would be to have a map equivalent:

 conduitMap :: Conduit i m o -> Conduit [i] m [o]

 But I am not sure how to do this either ...

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Yet another Conduit question

2013-01-31 Thread Michael Snoyman
On Fri, Feb 1, 2013 at 8:42 AM, Simon Marechal si...@banquise.net wrote:

 On 02/01/2013 05:21 AM, Michael Snoyman wrote:
  Firstly, what's the use case that you want to deal with lists? If it's
  for efficiency, you'd probably be better off using a Vector instead.

 That is a good point, and I wanted to go that way, but was not sure it
 would help me a lot here. My use case is for services where there is a
 bulk  API, such as Redis pipelining or Elasticsearch bulk inserts. The
 network round-trip gains would exceed by far those from a List to Vector
 conversion.

  But I think the inverse of `concat` is `singleton =
  Data.Conduit.List.map return`, or `awaitForever $ yield . return`, using
  the list instance for Monad. Your conduitMap could be implemented then
 as:
 
  conduitMap conduit = concat =$= conduit =$= singleton

 I can see how to do singleton, but that would gain me ... singletons.
 That means I could not exploit a bulk API.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



So you're saying you want to keep the same grouping that you had
originally? Or do you want to batch up a certain number of results? There
are lots of ways of approaching this problem, and the types don't imply
nearly enough to determine what you're hoping to achieve here.

Michael
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to store Fixed data type in the database with persistent ?

2013-01-27 Thread Michael Snoyman
On Jan 27, 2013 8:46 AM, alexander.vershi...@gmail.com wrote:

 Sat, Jan 26, 2013 at 12:21:02PM +0600, s9gf4...@gmail.com wrote
   According to the documentation, SQLite stores whatever you give it,
   paying very little heed to the declared type.  If you get SQLite to
   *compare* two numbers, it will at that point *convert* them to doubles
   in order to carry out the comparison.  This is quite separate from the
   question of what it can store.
 
  CREATE TABLE t1(val);
  sqlite> insert into t1 values ('24.24242424')
     ...> ;
  sqlite> insert into t1 values ('24.24242423')
     ...> ;
  sqlite> select * from t1 order by val;
  24.24242423
  24.24242424
  sqlite> select * from t1 order by val desc;
  24.24242424
  24.24242423
  sqlite> select sum(val) from t1;
  48.48484847
 
  it seems Sqlite can work with arbitrary precision data, very good !
  Persistent must have the ability to store Fixed.
 

 It's not correct. SQLite stores any value, but it will perform arithmetic
 operations only with double precision:

 sqlite> select val from t1;
 1
 0.01
 0.0001
 0.01
 0.0001
 0.01
 0.0001
 0.01
 0.0001
 0.01

 sqlite> select sum(val) from t1;
 1.0101010101

 as you can see, it only keeps about 14 digits of precision.

 Let's check another well known floating point problem:

 sqlite> create table t2 ('val');
 sqlite> insert into t2 values ('0.7');
 sqlite> update t2 set val = 11*val-7;

 -- val should remain constant: 0.7 is a fixed point of 11*x - 7
 sqlite> update t2 set val = 11*val-7; -- 4 times
 sqlite> select val from t2;
 0.6989597
 sqlite> update t2 set val = 11*val-7; -- 10 times more
 sqlite> select val from t2;
 0.430171514341321

 As you can see, you get errors. So SQLite doesn't support arbitrary
 precision values.

 As for me, Persistent should at least support a Money type and use the
 correct backend-specific type for it, either a native one or a big integer.

Let me clarify a bit:

1. Persistent will currently allow you to create a `Money` datatype which
internally stores as an integer.

2. What Persistent currently lacks is a PersistValue constructor for
arbitrary-precision values. As a result, during marshaling, some data will
be lost when converting from NUMERIC to Double.

3. The upcoming change we're discussing for Persistent would just be to add
such a constructor. We could theoretically provide some extra PersistField
instances as well, but that's not really what's being discussed.

HTH,

Michael
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to store Fixed data type in the database with persistent ?

2013-01-26 Thread Michael Snoyman
Very nice to see, I'm happy to stand corrected here. We'll definitely get
some support for fixed into the next major release.

On Saturday, January 26, 2013, wrote:

  According to the documentation, SQLite stores whatever you give it,
  paying very little heed to the declared type.  If you get SQLite to
  *compare* two numbers, it will at that point *convert* them to doubles
  in order to carry out the comparison.  This is quite separate from the
  question of what it can store.

 CREATE TABLE t1(val);
 sqlite> insert into t1 values ('24.24242424')
    ...> ;
 sqlite> insert into t1 values ('24.24242423')
    ...> ;
 sqlite> select * from t1 order by val;
 24.24242423
 24.24242424
 sqlite> select * from t1 order by val desc;
 24.24242424
 24.24242423
 sqlite> select sum(val) from t1;
 48.48484847

 it seems Sqlite can work with arbitrary precision data, very good !
 Persistent must have the ability to store Fixed.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org javascript:;
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to store Fixed data type in the database with persistent ?

2013-01-25 Thread Michael Snoyman
I can point you to the line of code causing you trouble[1].

The problem is, as you already pointed out, that we don't have a
PersistValue constructor that fits this case correctly. I think the right
solution is to go ahead and add such a constructor for the next release.
I've opened a ticket on Github[2] to track this.

By the way, not all databases supported by Persistent have the ability to
represent NUMERIC with perfect precision. I'm fairly certain that SQLite
will just cast to 8-byte reals, though it's possible that it will keep the
data as strings in some circumstances.

In the short term, you can probably get this to work today by turning your
Fixed values into Integers (by multiplying by some power of 10) when
marshaling to the database, and doing the reverse when coming from the
database. I haven't used this technique myself, but I think it should work.
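
A rough sketch of that workaround, written against the persistent-1.x class
used elsewhere in this thread (untested; the imports and exact class methods
may differ between persistent versions):

import Data.Fixed (Fixed, HasResolution (..))
import qualified Data.Text as T
import Database.Persist  -- PersistField, PersistValue, SqlType in 1.x

instance HasResolution a => PersistField (Fixed a) where
    toPersistValue x = PersistInt64 $ round $ x * fromInteger (resolution x)
    fromPersistValue (PersistInt64 i) = Right r
      where
        -- the recursive use of r only consults its type, to pick the resolution
        r = fromIntegral i / fromInteger (resolution r)
    fromPersistValue v =
        Left $ T.pack $ "Expected an integer for Fixed, got: " ++ show v
    sqlType _ = SqlInt64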

Michael

[1]
https://github.com/yesodweb/persistent/blob/master/persistent-postgresql/Database/Persist/Postgresql.hs#L271
[2] https://github.com/yesodweb/yesod/issues/493


On Fri, Jan 25, 2013 at 8:19 AM, s9gf4...@gmail.com wrote:


 All modern databases have a NUMERIC(x, y) field type with arbitrary precision.



 I need to store financial data with absolute accuracy, and I decided to
 use Fixed.

 How can I store the Fixed data type as NUMERIC? I decided to use Snoyman's
 persistent, but persistent cannot handle it out of the box and there is a
 problem with the custom field declaration.



 Here is the instance of PersistField for Fixed I wrote



 instance (HasResolution a) => PersistField (Fixed a) where
     toPersistValue a = PersistText $ T.pack $ show a
     -- fromPersistValue (PersistDouble d) = Right $ fromRational $ toRational d
     fromPersistValue (PersistText d) = case reads dpt of
         [(a, "")] -> Right a
         _ -> Left $ T.pack $ "Could not read value " ++ dpt ++ " as fixed value"
       where dpt = T.unpack d

     fromPersistValue a = Left $ T.append "Unexpected data value can not be converted to Fixed: " $ T.pack $ show a

     sqlType a = SqlOther $ T.pack $ "NUMERIC(" ++ (show l) ++ "," ++ (show p) ++ ")"
       where
         p = round $ (log $ fromIntegral $ resolution a) / (log 10)
         l = p + 15 -- FIXME: this is maybe not very good

     isNullable _ = False



 I did not find any proper PersistValue to convert into Fixed from, and
 converting Fixed to PersistValue is just a conversion to a string.
 Anyway, the saving works properly, but the reading does not - it just reads
 Doubles with rounding errors.



 If you uncomment the commented line in the instance you will see that the
 accuracy is not absolute.



 Here is test project to demonstrate the problem.



 https://github.com/s9gf4ult/xres



 If you launch main you will see that the precision is not very good, because
 the database value is converted to Double and then to Fixed.



 How can I solve this with persistent, or what other framework works well
 with the NUMERIC database field type?



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] why no replace function in our regular expression libs ?

2013-01-25 Thread Simon Michael
People have put a lot of work into regular expression libraries in
Haskell. Yet it seems very few of them provide a replace/substitute
function - just regex-compat and regexpr as far as I know. Why is that?
#haskell says:

sclv iirc its because that's a really mutatey operation in the
underlying c libs
sclv should be simple enough to write a general purpose wrapper layer
that uses captures to create the effect

Secondly, as of today, what do y'all do when you need that functionality?
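
For context, the substitution that regex-compat's Text.Regex does provide
looks like this (a minimal example):

import Text.Regex (mkRegex, subRegex)

main :: IO ()
main = putStrLn $ subRegex (mkRegex "[0-9]+") "order 42 of 7" "N"
-- prints: order N of N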

-Simon

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

