Re: [Haskell-cafe] Proposal: Partitionable goes somewhere + containers instances

2013-09-29 Thread Mario Blažević

On 09/29/13 08:20, Edward Kmett wrote:
I don't know that it belongs in the standard libraries, but there 
could definitely be a package for something similar.


ConstraintKinds are a pretty hefty extension to throw at it, and the 
signature written there prevents it from being used on ByteString, 
Text, etc.


This can be implemented with much lighter weight types though!
class Partitionable t where
  partition :: Int -> t -> [t]


I'm not sure why I don't already have this method in the 
FactorialMonoid class, but I'll happily add it if anybody wants it. 
Probably under the name splitEvery, since I already have splitAt.
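	A possible definition, to make the proposal concrete: this sketch
treats splitEvery as a plain function over any FactorialMonoid, assuming
only the splitAt method the message mentions and MonoidNull's null; the
recursive definition is my assumption, not part of the proposal.

import Prelude hiding (null, splitAt)
import Data.Monoid.Factorial (FactorialMonoid (splitAt))
import Data.Monoid.Null (MonoidNull (null))

-- Split a monoid into chunks of at most n prime factors each.
splitEvery :: FactorialMonoid m => Int -> m -> [m]
splitEvery n m | null m = []
               | otherwise = let (h, t) = splitAt n m
                             in h : splitEvery n t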


I'm not sure this is actually the best answer to Ryan's original 
plea, because I thought the idea was to let the original monoid split 
itself in an optimal way, which would preferably be an O(1) operation. 
Then again, this could be overly optimistic. For example, Map is defined as


data Map k a = Bin {-# UNPACK #-} !Size !k a !(Map k a) !(Map k a)
             | Tip

so the simple O(1) split would produce three submaps, the middle one 
having only one element. This operation would not be very 
parallelization-friendly.
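	To make the parallelization point concrete, here is a sketch of
recursive root-splitting for a parallel fold. It assumes a containers
version that exports Data.Map.splitRoot and the parallel package; neither
assumption comes from the message above.

import Control.Parallel.Strategies (parMap, rseq)
import Data.Foldable (foldMap)
import Data.Monoid (Monoid, mconcat)
import qualified Data.Map as Map

-- Each O(1) splitRoot yields the left subtree, the lone root binding,
-- and the right subtree, so the pieces are uneven, as noted above.
splitInto :: Int -> Map.Map k a -> [Map.Map k a]
splitInto n m | n <= 1 || Map.null m = [m]
              | otherwise = concatMap (splitInto (n `div` 3)) (Map.splitRoot m)

parFoldMap :: Monoid r => (a -> r) -> Map.Map k a -> r
parFoldMap f = mconcat . parMap rseq (foldMap f) . splitInto 64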


That is not particularly surprising, since parallelization was not 
the main concern when Data.Map (or containers) was originally written. 
The main goal, as it should have been, was optimizing the containers for 
sequential execution speed. A containers-like package optimized for easy 
and efficient parallelization would have to be written almost from scratch.




Re: [Haskell-cafe] Monomorphic containers, Functor/Foldable/Traversable WAS: mapM_ for bytestring

2013-09-13 Thread Mario Blažević

On 09/13/13 01:51, Michael Snoyman wrote:
On Fri, Sep 13, 2013 at 5:38 AM, Mario Blažević <blama...@acanac.net> wrote:


On 09/11/13 19:37, John Lato wrote:


3. I'm not entirely sure that the length* functions belong here. I
understand why, and I think it's sensible reasoning, and I don't have a
good argument against it, but I just don't like it. With those, and
mapM_-like functions, it seems that the foldable class is halfway to
being another monolithic ListLike. But I don't have any better ideas
either.


If monolithic classes bother you, my monoid-subclasses
package manages to break down the functionality into several
classes. One big difference is that everything is based off Monoid
rather than Foldable, and that has some big effects on the interface.



I'd point out what I'd consider a bigger difference: the type 
signatures have changed in a significant way. With MonoFoldable, 
folding on a ByteString would be:


(Word8 -> b -> b) -> b -> ByteString -> b

With monoid-subclasses, you get:

(ByteString -> b -> b) -> b -> ByteString -> b

There's certainly a performance issue to discuss, but I'm more worried 
about semantics. Word8 tells me something very specific: I have one, 
and precisely one, octet. ByteString tells me I have anywhere from 0 
to 2^32 or 2^64 octets. Yes, we know from context that it will always
be of size one, but the type system can't enforce that invariant.


All true, but we can also use this generalization to our advantage.
For example, the same monoid-subclasses package provides ByteStringUTF8,
a newtype wrapper around ByteString. It behaves the same as the plain
ByteString except that its atomic factors are not single bytes; instead
it folds on UTF-8 encoded character boundaries. You can't represent that
in Haskell's type system.
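	To illustrate, a sketch assuming monoid-subclasses'
Data.Monoid.Instances.ByteString.UTF8 and the factors function from
Data.Monoid.Factorial:

import Data.ByteString.Char8 (pack)
import Data.Monoid.Factorial (factors)
import Data.Monoid.Instances.ByteString.UTF8 (ByteStringUTF8 (ByteStringUTF8))

-- "naïve" is six bytes but five characters in UTF-8; factors splits the
-- wrapped ByteString on character boundaries, yielding five factors,
-- each itself a ByteStringUTF8.
utf8Factors :: [ByteStringUTF8]
utf8Factors = factors (ByteStringUTF8 (pack "na\xc3\xafve"))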




Re: [Haskell-cafe] Monomorphic containers, Functor/Foldable/Traversable WAS: mapM_ for bytestring

2013-09-13 Thread Mario Blažević

On 09/13/13 02:28, Michael Snoyman wrote:




On Fri, Sep 13, 2013 at 9:18 AM, Mario Blažević <blama...@acanac.net> wrote:


On 09/13/13 01:51, Michael Snoyman wrote:

On Fri, Sep 13, 2013 at 5:38 AM, Mario Blažević <blama...@acanac.net> wrote:

On 09/11/13 19:37, John Lato wrote:


3. I'm not entirely sure that the length* functions belong here. I
understand why, and I think it's sensible reasoning, and I don't have a
good argument against it, but I just don't like it. With those, and
mapM_-like functions, it seems that the foldable class is halfway to
being another monolithic ListLike. But I don't have any better ideas
either.


If monolithic classes bother you, my monoid-subclasses
package manages to break down the functionality into several
classes. One big difference is that everything is based off Monoid
rather than Foldable, and that has some big effects on the interface.



I'd point out what I'd consider a bigger difference: the type
signatures have changed in a significant way. With
MonoFoldable, folding on a ByteString would be:

(Word8 -> b -> b) -> b -> ByteString -> b

With monoid-subclasses, you get:

(ByteString -> b -> b) -> b -> ByteString -> b

There's certainly a performance issue to discuss, but I'm more
worried about semantics. Word8 tells me something very
specific: I have one, and precisely one, octet. ByteString
tells me I have anywhere from 0 to 2^32 or 2^64 octets. Yes,
we know from context that it will always be of size one, but
the type system can't enforce that invariant.


All true, but we can also use this generalization to our
advantage. For example, the same monoid-subclasses package
provides ByteStringUTF8, a newtype wrapper around ByteString. It
behaves the same as the plain ByteString except that its atomic factors
are not single bytes; instead it folds on UTF-8 encoded character
boundaries. You can't represent that in Haskell's type system.



I can think of two different ways of achieving this approach with 
MonoFoldable instead: by setting `Element` to either `Char` or 
`ByteStringUTF8`. The two approaches would look like:


newtype ByteStringUTF8A = ByteStringUTF8A S.ByteString
type instance Element ByteStringUTF8A = Char
instance MonoFoldable ByteStringUTF8A where
    ofoldr f b (ByteStringUTF8A bs) = ofoldr f b (decodeUtf8 bs)
    ofoldl' = undefined

newtype ByteStringUTF8B = ByteStringUTF8B S.ByteString
type instance Element ByteStringUTF8B = ByteStringUTF8B
instance MonoFoldable ByteStringUTF8B where
    ofoldr f b (ByteStringUTF8B bs) =
        ofoldr (f . ByteStringUTF8B . encodeUtf8 . T.singleton) b (decodeUtf8 bs)
    ofoldl' = undefined

I'd personally prefer the first approach, as that gives the right 
guarantees at the type level: each time the function is called, it 
will be provided with precisely one character. I believe the second 
approach provides the same behavior as monoid-subclasses does right now.




	Right now monoid-subclasses actually provides both approaches.
You're correct that it provides the second one as instance
FactorialMonoid ByteStringUTF8, but it also provides the first as
instance TextualMonoid ByteStringUTF8. The TextualMonoid class is
basically what you'd get if you restricted MonoFoldable to
Element = Char. I wanted to keep the package extension-free, you see.


My main point is that it's worth considering basing MonoFoldable on
FactorialMonoid, because MonoFoldable can be considered its specialization.
Methods like length, take, or reverse, which never mention the item type
in their signature, can be inherited from the FactorialMonoid superclass
with no change whatsoever. Other methods would differ in their
signatures (and performance), but the semantics would carry over.
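	A hedged sketch of what that inheritance could look like; the
MonoFoldable class below is a hypothetical reconstruction patterned on
this discussion, while Data.Monoid.Factorial is real:

{-# LANGUAGE TypeFamilies #-}

import Data.Monoid.Factorial (FactorialMonoid)
import qualified Data.Monoid.Factorial as Factorial

class FactorialMonoid m => MonoFoldable m where
  type Element m
  ofoldr :: (Element m -> b -> b) -> b -> m -> b

  -- These never mention Element, so the superclass supplies them as-is.
  olength :: m -> Int
  olength = Factorial.length
  oreverse :: m -> m
  oreverse = Factorial.reverse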




Re: [Haskell-cafe] Monomorphic containers, Functor/Foldable/Traversable WAS: mapM_ for bytestring

2013-09-12 Thread Mario Blažević

On 09/11/13 19:37, John Lato wrote:

I didn't see this message and replied privately to Michael earlier, so
I'm replicating my comments here.

1.  Sooner or later I expect you'll want something like this:

class LooseMap c el el' where
  lMap :: (el -> el') -> c el -> c el'

It covers the case of things like hashmaps/unboxed vectors that have
class constraints on elements.  Although maybe LooseFunctor or LFunctor
is a better name.

Probably something similar for Traversable would be good also, as would
a default instance in terms of Functor.

2.  IMHO cMapM_ (and related) should be part of the Foldable class.
This is entirely for performance reasons, but there's no downside since
you can just provide a default instance.

3.  I'm not entirely sure that the length* functions belong here.  I
understand why, and I think it's sensible reasoning, and I don't have a
good argument against it, but I just don't like it.  With those, and
mapM_-like functions, it seems that the foldable class is halfway to
being another monolithic ListLike.  But I don't have any better ideas
either.


	If monolithic classes bother you, my monoid-subclasses package manages 
to break down the functionality into several classes. One big difference 
is that everything is based off Monoid rather than Foldable, and that 
has some big effects on the interface.





Re: [Haskell-cafe] Traversals of monomorphic containers

2013-09-03 Thread Mario Blažević

On 09/02/13 06:53, Nicolas Trangez wrote:

# Redirected to haskell-cafe

On Sun, 2013-09-01 at 14:58 +0400, Artyom Kazak wrote:

Would this be an appropriate place to propose adding mapM_ (and then
possibly mapM) to bytestring library?

Was it suggested before? If yes, why was it rejected?


This got me wondering: there are several type-classes useful for
polymorphic container types, e.g. Functor, Foldable, and Traversable, which
all apply to some type of kind (* -> *).

Are there related things for monomorphic containers, like ByteString,
Text or some newtype'd Vector with fixed element type, e.g.

class MFunctor f a where
  mfmap :: (a -> a) -> f -> f

instance MFunctor ByteString Word8 where
  mfmap = ByteString.map



	I'm not aware of this particular class, but I have considered it. In 
the end I've chosen to generalize the class to FactorialMonoid instead:


class Monoid m => FactorialMonoid m where
   ...
   foldMap :: Monoid n => (m -> n) -> m -> n

	ByteString and Text are instances of the class, and so are lists, maps, 
and other containers, and Sum and Product as well.



http://hackage.haskell.org/packages/archive/monoid-subclasses/0.3.2/doc/html/Data-Monoid-Factorial.html
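	For instance, a hedged example of using the class's foldMap; the
qualified import avoids clashing with Data.Foldable:

import Data.ByteString (ByteString)
import Data.Monoid (Sum (..))
import qualified Data.Monoid.Factorial as Factorial

-- Counts the prime factors of any FactorialMonoid; for a ByteString
-- each factor is a one-byte ByteString, so this counts bytes.
factorCount :: ByteString -> Int
factorCount = getSum . Factorial.foldMap (const (Sum 1))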





or (maybe even better)

class MFunctor f where
  type Elem f
  mfmap :: (Elem f -> Elem f) -> f -> f

instance MFunctor ByteString where
  type Elem ByteString = Word8
  mfmap = ByteString.map

and similar for other classes.

Nicolas








Re: [Haskell-cafe] monoids induced by Applicative/Alternative/Monad/MonadPlus?

2013-08-23 Thread Mario Blažević

On 13-08-22 04:04 PM, Petr Pudlák wrote:

Or, if there are no such definitions, where would be a good place to add
them?


	If they are to be added to the base libraries, the Data.Monoid module 
would be my choice.


	I did wish I had the AppMonoid instance on several occasions, when 
using various parser combinator libraries that don't support this 
reasonable instance of Monoid:


   numericLiteral = optionalMonoid (string "+" <|> string "-")
                    <> some digit
                    <> optionalMonoid (string "." <> some digit)


	The problem is, the AppMonoid newtype would not help in that situation
unless it also implemented the Applicative and Alternative classes, as
well as the parsing primitives. Without the latter, the above code would
look like this:



   numericLiteral = optionalMonoid
                      (AppMonoid (string "+" <|> string "-"))
                    <> some (AppMonoid digit)
                    <> optionalMonoid (AppMonoid (string ".")
                                       <> some (AppMonoid digit))


	The point of the above is that I don't think there is enough 
justification for these newtypes. The Applicative and Alternative 
instances are typically used because of the primitives they come with, 
and newtype wrappings like AppMonoid and AltMonoid can't support those 
easily. Unless ekmett adds the appropriate instances to his parsers 
package, they would be too clumsy to use.





Petr

On 08/20/2013 06:55 PM, Petr Pudlák wrote:


Dear Haskellers,

are these monoids defined somewhere?

import Control.Applicative
import Data.Monoid

newtype AppMonoid m a = AppMonoid (m a)
instance (Monoid a, Applicative m) => Monoid (AppMonoid m a) where
  mempty = AppMonoid $ pure mempty
  mappend (AppMonoid x) (AppMonoid y) = AppMonoid $ mappend <$> x <*> y
-- With the () monoid for `a` this becomes the monoid of effects.

newtype AltMonoid m a = AltMonoid (m a)
instance Alternative m => Monoid (AltMonoid m a) where
  mempty = AltMonoid empty
  mappend (AltMonoid x) (AltMonoid y) = AltMonoid $ x <|> y

(and similarly for Monad/MonadPlus, until they become subclasses of
Applicative?)

Best regards,
Petr











Re: [Haskell-cafe] monoids induced by Applicative/Alternative/Monad/MonadPlus?

2013-08-23 Thread Mario Blažević

See also this thread from two years ago:

http://www.haskell.org/pipermail/haskell-cafe/2011-June/091294.html




[Haskell-cafe] ANNOUNCE: monoid-subclasses-0.1.1, incremental-parser-0.2.2, and SCC-0.8

2013-03-21 Thread Mario Blažević
The new package monoid-subclasses [1] exports a number of classes 
that sit between monoids and groups: ReductiveMonoid, 
CancellativeMonoid, GCDMonoid, MonoidNull, and FactorialMonoid among 
others. The package also comes with class instances for all applicable 
data types from base, vector, containers, bytestring, and text packages.


These classes were initially a part of the incremental-parser
package [2], but they have moved out: incremental-parser-0.2.2 now depends
on monoid-subclasses, which adds more class methods and instances.
The parsers constructed using the incremental-parser library generally 
work on any ReductiveMonoid, FactorialMonoid or TextualMonoid. You can 
thus construct a generic parser that operates on String, ByteString, or 
Text input, or even on Product Integer or a Map if you are so inclined.


Version 0.8 of Streaming Component Combinators (a.k.a. SCC) has
also been re-designed from the ground up. In the earlier versions, as in
the pipes and conduit packages, the types of streams and stream-processing
components were parametrized by the type of the individual items flowing
through. All sources and sinks are now parametrized instead by the type of
the monoid they produce or consume. As a consequence, SCC coroutines can
now communicate in chunks of ByteString or Text instead of just lists, and
these chunks can be of arbitrary size. The boundaries of the producer's
chunks are invisible to the consumer (except through some low-level
functions), which must specify the amount of data it wants to receive
using a ReadRequest. This change at the heart of SCC has had a positive
impact on the outer layers, and especially on the Splitter component type,
which has become much simpler.


[1] http://hackage.haskell.org/package/monoid-subclasses-0.1.1
[2] http://hackage.haskell.org/package/incremental-parser-0.2.2
[3] http://hackage.haskell.org/package/scc-0.8




Re: [Haskell-cafe] Open-source projects for beginning Haskell students?

2013-03-15 Thread Mario Blažević

On 13-03-11 10:52 PM, Michael Orlitzky wrote:

On 03/11/2013 11:48 AM, Brent Yorgey wrote:


So I'd like to do it again this time around, and am looking for
particular projects I can suggest to them.  Do you have an open-source
project with a few well-specified tasks that a relative beginner (see
below) could reasonably make a contribution towards in the space of
about four weeks? I'm aware that most tasks don't fit that profile,
but even complex projects usually have a few simple-ish tasks that
haven't yet been done just because no one has gotten around to it
yet.


It's not exciting, but adding doctest suites with examples to existing
packages would be a great help.

   * Good return on investment.

   * Not too hard.

   * The project is complete when you stop typing.



	In a similar spirit, many existing projects would benefit from a
benchmark suite. Building one is a fairly simple but somewhat tedious
process, and good practice for team work:


1. Take a well-defined task, like parsing JSON for example.
2. Devise a test scenario that includes the task.
3. Make a list of all libraries on Hackage which (claim to) do the task.
4. Write a simple test for each of the libraries.
5. Combine all the tests into a Criterion test suite.
6. Publish the test suite and the benchmark results.
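	A minimal sketch of steps 4 and 5, assuming aeson as one of the
measured libraries and a sample.json input file; both names are
placeholders rather than part of the suggestion:

import Criterion.Main (bench, bgroup, defaultMain, nf)
import qualified Data.Aeson as Aeson
import qualified Data.ByteString.Lazy as BL

main :: IO ()
main = do
  input <- BL.readFile "sample.json"   -- step 2: the shared test scenario
  defaultMain
    [ bgroup "parse JSON"
        [ bench "aeson" $
            nf (Aeson.decode :: BL.ByteString -> Maybe Aeson.Value) input
          -- step 4: one bench per library under comparison goes here
        ]
    ]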




Re: [Haskell-cafe] Suggestiong for inter-thread communication

2013-01-28 Thread Mario Blažević

On 13-01-26 05:28 AM, Erik de Castro Lopo wrote:

Thiago Negri wrote:


Do you need advice on what? I didn't understand your last phrase.


Well I have data from two sources, stdin and the calculation
thread. If I was doing this in C, I'd probably use a pipe for the
calculation data and then do select on the two file descriptors.

There is a select package:

 http://hackage.haskell.org/package/select

but I was wondering if there was a more idiomatic Haskell way
of dealing with inputs from more than one source.



	There are list arrows, and also coroutines in many guises including 
pipes, conduits, and iteratees. These are all co-operative concurrency, 
however, and I can't tell if your problem requires pre-emptive threads.





Re: [Haskell-cafe] Call for discussion: OverloadedLists extension

2012-09-28 Thread Mario Blažević

On 12-09-26 08:07 PM, wren ng thornton wrote:

On 9/25/12 1:57 PM, Sjoerd Visscher wrote:

Maybe we could make a literal [a,b,c] turn into
unpack [a,b,c]#
where
[a,b,c]#
is a statically-allocated vector?


I'm kinda surprised this isn't already being done. Just doing this seems
like it'd be a good undertaking, regardless of whether we get overloaded
list literals. Just storing the literal as a C-like array and inflating
it to a list/array/vector at runtime seems like it should be a big win
for code that uses a lot of literals.


Why?

	I'm surprised that this is an issue at all. If the list literals you are
talking about are constant, wouldn't GHC apply constant folding and
construct the list only the first time it's needed?





Re: [Haskell-cafe] [ANNOUNCE] Fmark markup language

2012-09-19 Thread Mario Blažević

On 12-09-18 07:37 PM, Richard O'Keefe wrote:


On 19/09/2012, at 1:43 AM, Stefan Monnier wrote:


The problem with that is that some people DO end some headings with
a full stop; for them your special syntax is not natural.


Markdown/ReST is already using the no syntax idea (e.g. compared to
pre-wiki markup such a LaTeX or Texinfo), so he's simply trying to push
this idea further.


Markdown is very heavy on syntax,
what it is *light* on is specification of what the
syntax actually is.  As a result,
I'm aware of three different dialects,
and someone told me about having to reverse
engineer the syntax from a Perl implementation.
As a further result, I cannot write a program to
reliably *generate* Markdown.


	Very true. Sadly, this is the case with almost all other Wiki-like 
markup schemes out there. They are all implementation-specified. The 
only exception I'm aware of is Creole, for which an EBNF grammar exists, 
even if it's pretty nasty-looking. A look at that specification makes 
one appreciate how badly specified natural syntax is.


	In my opinion, there is no single natural syntax that can be imposed on
ASCII strings and serve the majority of uses. There are many different
syntaxes that feel natural for different uses and different users, and
the best we can hope to achieve would be a way to provide a formal and
readable specification for each of those syntaxes. I've been playing
with one approach in this direction with the concrete-relaxng-parser
package, but it's still early days.





I suspect it'll be difficult.


Oh, more power to him for trying.
I just don't think it can be pushed very far.

Oh, there is a really *filthy* hack that could be pulled
for italics, bold face, and so on.  Contrary to its original
principles, Unicode includes several copies of ASCII
(see http://unicode.org/charts/PDF/U1D400.pdf):
Mathematical bold,
Mathematical italic,
Mathematical bold italic,
Mathematical script,
Mathematical bold script,
Mathematical fraktur,
Mathematical double struck (blackboard-bold),
Mathematical bold fraktur,
Mathematical sans-serif,
Mathematical sans-serif bold,
Mathematical sans-serif italic,
Mathematical sans-serif bold italic,
Mathematical monospace,
and some similar sets of Greek.


Thank you for sharing this hack. It's very amusing.







Re: [Haskell-cafe] [Pipes] Can pipes solve this problem? How?

2012-08-16 Thread Mario Blažević

On 12-08-15 02:54 PM, Daniel Hlynskyi wrote:

Hello Cafe.
Consider code that takes input from a handle until a special substring is
matched:


 matchInf a res s | a `isPrefixOf` s = reverse res
 matchInf a res (c:cs)              = matchInf a (c:res) cs
 hTakeWhileNotFound str hdl = hGetContents hdl >>= return . matchInf str []

So, the question is - can pipes (any package of them) be the Holy 
Grail in this situation, to both keep simple code and better deal with 
handles (do not close them specifically)? How?


It's more complex than Pipes, but SCC gives you what you need. If you 
cabal install it, you have the choice of using the shsh executable on 
the command line to accomplish your task:


$ shsh -c 'cat input-file.txt | select prefix (! substring "search string")'


or using the equivalent library combinators from Haskell code:

 import System.IO (Handle, stdin)
 import Control.Monad.Coroutine (runCoroutine)
 import Control.Concurrent.SCC.Sequential

 pipeline :: String -> Handle -> Producer IO Char ()
 pipeline str hdl = fromHandle hdl >-> select (prefix $ sNot $ substring str)

 hTakeWhileNotFound :: String -> Handle -> IO String
 hTakeWhileNotFound str hdl =
    fmap snd $ runCoroutine $ pipe (produce $ pipeline str hdl) (consume toList)

 main = hTakeWhileNotFound "up to here" stdin >>= putStrLn





Re: [Haskell-cafe] Parallel cooperative multithreading?

2012-05-23 Thread Mario Blažević

On 12-05-22 09:55 AM, Benjamin Ylvisaker wrote:

Has anyone ever worked on implementing something like this in Haskell?

http://www.cs.hmc.edu/~stone/papers/ocm-unpublished.pdf

The outline of the idea:

- Concurrent programming is really hard with the popular frameworks
today.
- For most purposes parallel programming is hard, in some part because
it requires concurrent programming.  Of course there are attempts to do
non-concurrent parallel programming, but I hope it's not too
controversial to say that such frameworks are still on the fringe.
- Cooperative concurrency is way easier than preemptive concurrency
because between invocations of pause/yield/wait, sequential reasoning
works.
- Historically, cooperative concurrency only worked on a single
processors, because running threads in parallel would break the
atomicity of sequential blocks (between invocations of p/y/w).
- Researchers have been pouring tons of effort into efficiently running
blocks of code atomically.
- Hey, we can do parallel cooperative multithreading!

The paper discusses implementations in Lua, C++ and C, but I think
Haskell could be an awesome substrate for such a framework.  Has anyone
thought about this?


I have, which is why the monad-coroutine library comes with support 
for running multiple coroutines in parallel, meaning that their steps 
are run in parallel rather than interleaved. Unfortunately, I never 
managed to extract any actual speedup out of this feature in my tests.





Re: [Haskell-cafe] Mixing Unboxed Mutable Vectors and Parsers

2012-04-10 Thread Mario Blažević

On 12-04-07 05:35 PM, Myles C. Maxfield wrote:

So here are my questions:
...
3. Are there any parsers that support streaming semantics and being
used as a monad transformer? This would require rewriting my whole
program to use this new parser, but if that's what I have to do, then
so be it.


	 Have a look at the incremental-parser package. It's not a monad 
transformer, only a monad, but it's written with streaming in mind. In 
particular, it solves the problem of mismatch between the input chunk 
boundaries and the boundaries of the structures you're trying to parse.


	The current version supports ByteString, Text, and list inputs out of 
the box, but support for Vector and arrays can be added as outside 
instances.




Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.0.1

2012-03-11 Thread Mario Blažević

On 12-03-11 09:09 AM, Paolo Capriotti wrote:

The Category law would be broken, though:

unawait x >>> id == yield x !== unawait x

How did you get this equation? It's not even well-typed:

unawait :: a -> Pipe a b m ()
yield :: b -> Pipe a b m ()


You're right, it's completely wrong. I was confused last night.



Someone actually implemented a variation of Pipes with unawait:
https://github.com/duairc/pipes/blob/master/src/Control/Pipe/Common.hs
(it's called 'unuse' there).

I actually agree that it might break associativity or identity, but I
don't have a counterexample in mind yet.


It's difficult to say without having the implementation of both 
unawait and all the combinators in one package. I'll assume the 
following equations hold:


   unawait x >> await = return x
   unawait x >> yield y = yield y >> unawait x
   (p1 >> unawait x) >+> p2 = (p1 >+> p2) <* unawait x   -- this one tripped me up

   first (unawait (x, y)) = unawait x

The first two equations would let us move all the unawaits that are
not immediately re-awaited to the end of their monadic pipeline stage:
the unawaits can always be performed in bulk as the last operation. The
third equation allows us to move these unawaits to the end of the
pipeline.


If these equations hold, unawait now appears to be law-abiding to 
me. I apologize for my unsubstantiated smears.


The 'unuse' implementation you linked to drops the unmatched Unuse 
suspensions, so it doesn't follow the third equation.



go i True u (Free (Unuse a d)) = go i True u d
go True o (Free (Unuse a u)) d@(Free (Await _)) = go True o u d


I think this implementation does break the Category law, but I'm
having trouble testing it in GHCi.





Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.0.1

2012-03-11 Thread Mario Blažević

On 12-03-11 12:39 PM, Chris Smith wrote:

On Sun, Mar 11, 2012 at 10:30 AM, Mario Blažević <blama...@acanac.net> wrote:

   (p1 >> unawait x) >+> p2 = (p1 >+> p2) <* unawait x   -- this one tripped me up

I don't think this could reasonably hold.  For example, you'd expect
that for any p, idP >> p == idP, since idP never terminates at all.
But then let p1 == idP, and you get something silly.  The issue is
with early termination: if p2 terminates first in the left hand side,
you don't want the unawait to occur.


No, idP does terminate once it consumes its input. Your idP >> p
first reproduces the complete input, and then runs p with empty input.





Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.0.1

2012-03-11 Thread Mario Blažević

On 12-03-11 01:36 PM, Chris Smith wrote:

On Sun, Mar 11, 2012 at 11:22 AM, Mario Blažević <blama...@acanac.net> wrote:

No, idP does terminate once it consumes its input. Your idP >> p first
reproduces the complete input, and then runs p with empty input.

This is just not true.  idP consumes input forever, and (idP >> p) =
idP, for all pipes p.

If it is composed with another pipe that terminates, then yes, the
*composite* pipe can terminate, so for example ((q >+> idP) >> p) may
actually do something with p.  But to get that effect, you need to
compose before the monadic bind... so for example (q >+> (idP >> p)) =
(q >+> idP) = q.  Yes, q can be exhausted, but when it is, idP will
await input, which will immediately terminate the (idP >> p) pipe,
producing the result from q, and ignoring p entirely.


Sorry. I was describing the way it's done in SCC, and I assumed 
that pipes and pipes-core behaved the same. But GHCi says you're right:


> :{
| runPipe ((fromList [1, 2, 3] >> return [])
|          >+> (idP >> fromList [4, 5] >> return [])
|          >+> consume)
| :}
[1,2,3]


May I enquire what was the reason for the non-termination of idP? 
Why was it not defined as 'forP yield' instead? The following command 
runs the way I expected.


> :{
| runPipe ((fromList [1, 2, 3] >> return [])
|          >+> (forP yield >> fromList [4, 5] >> return [])
|          >+> consume)
| :}
[1,2,3,4,5]




Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.0.1

2012-03-10 Thread Mario Blažević

On 12-03-10 05:16 AM, Paolo Capriotti wrote:

On Sat, Mar 10, 2012 at 4:21 AM, Mario Blažević <blama...@acanac.net> wrote:

I like your design, it seems to strike a good balance between elegance
and practicality. The only thing missing for the latter is a deeper support
for chunking. Of course, that would probably destroy some of the elegance
[1]. I don't think that problem has been solved in any of the
enumerator/iteratee/pipe/wire/conduit libraries so far.

Chunking is supported but not by primitive constructs. The way you
implement chunked streams is to simply use some form of container
representing a chunk as your input/output type.

Of course, that means that the abstraction is now operating at the
level of chunks instead of elements, which may be inconvenient, but I
doubt that there exists a way to lift element operations to chunks
in an efficient and general way.

Another issue is how to deal with unconsumed input. For that, there is
a ChunkPipe type (in pipes-extra) with a specialized monad instance
that threads unconsumed input along. You can see an example of
ChunkPipe in action in this prototype http server by Jeremy Shaw:
http://src.seereason.com/pipes-http-2/pipes-http-2/. Note that this is
based on a old version of pipes-core, however.


The only sane way I've found to deal with chunks is to move the
responsibility into the glue logic, which would be your (>+>) and (>>)
combinators. The upstream argument of (>+>) could then produce chunks of
any size it finds suitable, while the downstream argument would specify
exactly how much input it needs without having to worry about the
upstream chunk boundaries. How these requests are phrased is an open
question. I've developed the incremental-parser package specifically for
this purpose. Other approaches are possible, but I'm convinced that
chunking should not be left to individual components. The chunk type
probably shouldn't be reflected even in their types.




Did you consider adding some stream-splitting and merging pipes, like
those in the SCC package [2] or those described in the last Monad.Reader
issue [3]? Your arrow-like combinators seem well thought out, but they
don't go very far.

I'm not sure why you say that they don't go very far. I looked at
Splitter and Join in Monad.Reader 19, and they seem equivalent to
'splitP' and 'joinP' in pipes-core.



The main purpose of the Splitter type is to act as a conditional, 
sending each input item *either* into the left or into the right output, 
marking is as either true or false. Your splitterP sends each item to 
*both* left and right output. It's a tee, not a split.


If your point is that the Splitter a m r type is isomorphic to Pipe 
a (Either a a) m r, that is true. There is a benefit to the abstraction, 
though. Once you introduce chunking into the picture, however, the 
Splitter type can be changed under the hood to send an entire chunk to 
its left or right output. The corresponding efficient chunked Pipe type 
would be Pipe [a] (Either [a] [a]) m r, which is not at all the same as 
Pipe [a] [Either a a] m r -- if you have to pack every single item of 
the input chunk into an Either value, you've lost all performance 
benefits of chunking. The former type is efficient but I'm not sure if 
it would allow you to abstract the chunking logic out of the individual 
components.



There shouldn't be any problem implementing all the other combinators
there in terms of monoidal primitives (e.g. 'not' is just 'swap').


I agree, with the chunking reservation above. Anyway, consider 
adding the Boolean combinators to the library. I find them quite intuitive.






Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.0.1

2012-03-10 Thread Mario Blažević

On 12-03-10 05:19 PM, Twan van Laarhoven wrote:

On 2012-03-10 11:16, Paolo Capriotti wrote:

Another issue is how to deal with unconsumed input. For that, there is
a ChunkPipe type (in pipes-extra) with a specialized monad instance
that threads unconsumed input along. You can see an example of
ChunkPipe in action in this prototype http server by Jeremy Shaw:
http://src.seereason.com/pipes-http-2/pipes-http-2/. Note that this is
based on a old version of pipes-core, however.


A nice way to deal with unconsumed input (from a user's perspective) 
would be a function


-- | Pass some unconsumed input back upstream.
--   The next @await@ will return this input without blocking.
unawait :: Monad m => a -> Pipe a b m ()


The function may be called unawait, but there's nothing stopping 
you from inserting something into the stream that wasn't in the input to 
start with. I find that this approach breaks too many invariants.
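	For instance, nothing prevents a pipe from duplicating input; this
illustration uses the proposed unawait, which exists in no released
package:

-- After the two unawaits, the stream contains a second copy of x that
-- the upstream never produced.
duplicateHead :: Monad m => Pipe a b m ()
duplicateHead = do x <- await
                   unawait x
                   unawait x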





Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.0.1

2012-03-10 Thread Mario Blažević

On 12-03-10 09:05 PM, Twan van Laarhoven wrote:

On 2012-03-11 00:09, Mario Blažević wrote:

On 12-03-10 05:19 PM, Twan van Laarhoven wrote:

-- | Pass some unconsumed input back upstream.
--   The next @await@ will return this input without blocking.
unawait :: Monad m => a -> Pipe a b m ()


The function may be called unawait, but there's nothing stopping you
from inserting something into the stream that wasn't in the input to
start with. I find that this approach breaks too many invariants.


Which invariants does it break exactly? I.e. what properties do you 
expect to hold that fail when you can push arbitrary values back 
up-stream?


Are you asking for a written-up set of Pipe laws? I'm not aware of
any, and I'd love to see one.

The Category law would be broken, though:

unawait x >>> id == yield x !== unawait x

I suppose the additional Arrow laws, if they were transcribed to
the pseudo-Arrow operations that Pipe supports, would be broken as well.





Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.0.1

2012-03-09 Thread Mario Blažević

On 12-03-09 07:36 PM, Paolo Capriotti wrote:

I'm pleased to announce the release of version 0.0.1 of pipes-core, a
library for efficient, safe and compositional IO, similar in scope to
iteratees and conduits.


I like your design, it seems to strike a good balance between 
elegance and practicality. The only thing missing for the latter is a 
deeper support for chunking. Of course, that would probably destroy some 
of the elegance [1]. I don't think that problem has been solved in any 
of the enumerator/iteratee/pipe/wire/conduit libraries so far.


Did you consider adding some stream-splitting and merging pipes, 
like those in the SCC package [2] or those described in the last 
Monad.Reader issue [3]? Your arrow-like combinators seem well thought 
out, but they don't go very far.



[1] http://www.haskell.org/pipermail/haskell-cafe/2010-August/082540.html
[2] 
http://hackage.haskell.org/packages/archive/scc/0.7.1/doc/html/Control-Concurrent-SCC-Sequential.html#g:24

[3] http://themonadreader.files.wordpress.com/2011/10/issue19.pdf




Re: [Haskell-cafe] How to increase performance using concurrency for sequential producer-consumer problem

2012-02-13 Thread Mario Blažević

On 12-02-13 10:12 AM, Roel van Dijk wrote:

Hello,

I have a program which I believe can benefit from concurrency. But I
am wondering if the mechanisms I need already exist somewhere on
Hackage.


You can try monad-coroutine. Here is an incomplete transcription of 
your code:


import Control.Monad
import Control.Monad.Coroutine
import Control.Monad.Coroutine.SuspensionFunctors
import Control.Monad.Trans.Class

type Producer a = Coroutine (Yield a) IO ()
type Converter a b = Coroutine (EitherFunctor (Await a) (Yield b)) IO ()
type Consumer b = Coroutine (Await b) IO ()

producer :: Producer Int
producer = forM_ [1..10] yield

converter :: Converter Int Int
converter = forever $ do a <- mapSuspension LeftF await
                         mapSuspension RightF $ yield (10 * a)

consumer :: Consumer Int
consumer = forever (await >>= lift . print)

main = seesaw parallelBinder awaitYieldResolver consumer producer

The main function is incorrect because it's not using the
converter. I didn't add that because there's no ready function for
invoking it in the package, so it wouldn't be a one-liner, and also
because I'm not sure if it should be a monadic coroutine of its own or a
pure function.


The producer and consumer coroutines' steps should be running in
parallel. I say "should" because I never got any decent parallel speedup
with this scheme, but I'm not sure if the problem is in the library code
or in the earlier versions of GHC.


One thing that's missing here is a way for the producer to run 
ahead of the consumer and produce more results that would be buffered. 
At the moment the producer blocks until the consumer is ready. I don't 
have a ready solution for this, but I'll give it some thought.
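	For completeness, outside the coroutine framework the ordering
problem can be solved with one MVar per produced item. The following
sketch is my own illustration of that idea, not a part of
monad-coroutine, and it forks one thread per item with no throttling:

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import Control.Monad (forM_, (>=>))

-- One empty MVar is queued per item, in production order; conversions
-- run in parallel, and consuming the MVars in queue order preserves
-- the producer's ordering.
orderedDriver :: ((a -> IO ()) -> IO ())  -- producer with callback
              -> (a -> b)                 -- pure converter, the bottleneck
              -> (b -> IO ())             -- consumer
              -> IO ()
orderedDriver produce convert consume = do
  queue <- newMVar []
  produce $ \a -> do
    slot <- newEmptyMVar
    modifyMVar_ queue (return . (slot :))
    _ <- forkIO (putMVar slot $! convert a)
    return ()
  slots <- readMVar queue
  forM_ (reverse slots) (takeMVar >=> consume)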





Here is a sketch of my program, in literate Haskell:


module Problem where
import Control.Monad ( forM_ )

The producer produces values. It blocks until there are no more
values to produce. Each value is given to a callback function.


type Producer a = (a -> IO ()) -> IO ()

The converter does some work with a value. This work is purely CPU and
it is the bottleneck of the program. The amount of work it has to do
is variable.


type Converter a b = a -> b

The consumer does something with the value calculated by the
converter. It is very important that the consumer consumes the values
in the same order as they are produced.


type Consumer b = b -> IO ()

Dummy producer, converter and consumer:


producer :: Producer Int
producer callback = forM_ [1..10] callback
converter :: Converter Int Int
converter = (*10)
consumer :: Consumer Int
consumer = print

A simple driver. Does not exploit concurrency.


simpleDriver :: Producer a -> Converter a b -> Consumer b -> IO ()
simpleDriver producer converter consumer = producer (consumer . converter)

current_situation :: IO ()
current_situation = simpleDriver producer converter consumer

Ideally I would like a driver that spawns a worker thread for each
core in my system. But the trick is in ensuring that the consumer is
offered results in the same order as they are generated by the
producer.

I can envision that some kind of storage is necessary to keep track of
results which can not yet be offered to the consumer because it is
still waiting for an earlier result.

Is there any package on Hackage that can help me with this problem? Or
do I have to implement it using lower level concurrency primitives?

Regards,
Roel






Re: [Haskell-cafe] TCP Server

2012-01-30 Thread Mario Blažević

On 12-01-28 06:56 AM, Felipe Almeida Lessa wrote:

On Sat, Jan 28, 2012 at 9:40 AM, Yves Parès <yves.pa...@gmail.com> wrote:

I think there is still no consensus on which iteratee library is the one
to use. There are at least iteratee, enumerator, iterIO, conduit, and
pipes. The reusability of your libary depends on the choice of
iteratee-style library you select.


Yes, and IMO this is a growing problem. Since iteratees were designed, a lot
of different libraries providing this kind of service have appeared.
Of course they all have advantages and inconvenients, but some libraries
that could be compatible are not, because they rely on a different
iteratee-ish package. For instance pipes (as its documentation states) is
really like iteratee... but with more concrete names. Still it's sufficient
to break compatibility.


In principle it's possible to have some code that converts functions
between these different iteratee packages -- at least between
iteratee, enumerator and iterIO since these seem to have more or less
the same implementation ideas.



	It's not only possible, it's done. The coroutine-enumerator and 
coroutine-iteratee packages convert to and from the enumerator and 
iteratee packages, using monad-coroutine as a bridge. The conversions 
are bare-bone, and I don't know if anybody has ever used them in 
practice. They still prove the concept.



 Converting from pipes may be possible, but to pipes seems pretty
 difficult since pipes sweeps IO under the rug.

	A Pipe appears to be just a specialized Coroutine from the 
monad-coroutine package with its type arguments expanded, so it would 
also be convertible.





Re: [Haskell-cafe] The maximum/minimum asymmetry

2011-09-05 Thread Mario Blažević

On 11-09-05 10:42 AM, Sjoerd Visscher wrote:

This way these laws hold for non-empty lists:

maximumBy f xs = last (sortBy f xs)
minimumBy f xs = head (sortBy f xs)


That's not a bad justification for the way the implementation works,
even if it's not the original reason behind it. I think these laws
should be added to the Haddock documentation.





[Haskell-cafe] The maximum/minimum asymmetry

2011-09-04 Thread Mario Blažević
I was recently surprised to discover that the maximum and maximumBy 
functions always return the *last* maximum, while minimum and minimumBy 
return the *first* minimum in the list. The following GHCi session 
demonstrates this:


$ ghci
GHCi, version 7.2.1: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Loading package ffi-1.0 ... linking ... done.
Prelude> :module +Data.List Data.Ord
Prelude Data.List Data.Ord> let list = [(1, 'B'), (1, 'A')]
Prelude Data.List Data.Ord> maximumBy (comparing fst) list
(1,'A')
Prelude Data.List Data.Ord> minimumBy (comparing fst) list
(1,'B')

I would normally consider this kind of gratuitous asymmetry a bug, 
but seeing that these functions' implementations have been specified in 
the Haskell 98 Library Report, I guess they are now a permanent feature 
of the language. Can anybody explain the reason for this behaviour?





[Haskell-cafe] Fwd: job position

2011-09-02 Thread Mario Blažević
	For anybody interested in programming languages or markup languages, 
there is an open job position for a junior developer at Stilo 
(http://www.stilo.com/). Haskell is currently used only for prototyping, 
but there are plans to begin some major development in Haskell within a 
year.


The job ad is attached.





Re: [Haskell-cafe] Data Flow Programming in FP

2011-06-21 Thread Mario Blažević

On 11-06-20 10:45 AM, Richard Senington wrote:

Hi all,

I have recently become interested in Dataflow programming and how it 
related to functional languages.
I am wondering if the community has any advice on reading matter or 
other directions to look at.


So far I have been looking through the FRP libraries, using Haskell 
functions with lazy lists for co-routines and
the Essence of Dataflow Programming by Uustalu and Vene where they 
propose using co-monads.


It looks as though Iteratees are also relevant but I have not got 
round to looking at them in detail yet.


Have I missed anything?


You may also be interested in the SCC package, based on the 
lower-level monad-coroutine package. It's more in the Flow-Based 
Programming tradition than in the Dataflow one, so I'm not sure it fits 
what you're looking for.





Re: [Haskell-cafe] question about interruptable state runner

2011-06-13 Thread Mario Blažević

On 11-06-09 04:14 PM, Alexander V Vershilov wrote:

Hello.

I'm writing a small tcp server that can handle connections and
answer according to rules written in a small script interpreted by the server.
For this purpose I've written an interpreter that has the type

   ErrorT MyError (StateT ScriptState IO)

so I can call native IO functions in that script, and define new ones.
I can run this script with runStateT (runErrorT (...)) oldState.

But there is one problem: in the script I should be able to call functions
that will stop script interpretation and wait for some server event before
continuing interpretation.

Can somebody give advice on the best way to do it?



	It appears you've already settled on operational, and that's a good 
choice. I just wanted to point out the monad-coroutine as a possible 
alternative. The two packages have lots of similarities, the main 
difference appears to be in the way the monad suspension is seen: Prompt 
vs. Functor.




Re: [Haskell-cafe] Attoparsec concatenating combinator

2011-06-03 Thread Mario Blažević

On 11-06-03 06:00 AM, Yitzchak Gale wrote:

Mario Blažević wrote:

  I don't know if this helps, but the incremental-parser library has
exactly the combinator you're looking for.


Wow, that is a beautiful implementation of a general parser
library. So much simpler than Parsec. Thanks for pointing it out.


Thanks. I guess I should get to work fixing its deficiencies then.



Why are you hiding those nice Monoid classes in the parser
package? Shouldn't it be a separate package?


	I considered it, and I'll do it if there's interest, but for the first 
release I decided to keep them close to where they're needed. There was 
less work that way. I'd hate to release a standalone package with only 
half the instance implementations.




Edward Kmett has also been adding some nice Monoid
abstractions lately. I haven't been following all of it. I wonder
how yours and his relate.


	If you mean semigroups, they are related but only through Monoid.
Semigroup is a superclass of Monoid, whereas CancellativeMonoid and
FactorialMonoid are its subclasses. CancellativeMonoid lies between
Monoid and Group in power. It would make more sense to merge my
classes into some group package, though I don't see any obvious
candidate on Hackage right now.
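	As a rough illustration of where that power shows up, a GHCi sketch
assuming the stripPrefix method from monoid-subclasses'
Data.Monoid.Cancellative:

Prelude> import Data.Monoid.Cancellative
Prelude Data.Monoid.Cancellative> stripPrefix "hello " "hello world"
Just "world"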





Re: [Haskell-cafe] Attoparsec concatenating combinator

2011-06-02 Thread Mario Blažević
On Thu, Jun 2, 2011 at 10:02 AM, Yitzchak Gale <g...@sefer.org> wrote:
 I often find while using attoparsec and attoparsec-text that I need to
 match a number of text parsers consecutively and concatenate the
 result. By text parser I mean Parser ByteString for attoparsec and
 Parser Text for attoparsec-text.

   I don't know if this helps, but the incremental-parser library has
exactly the combinator you're looking for.



Re: [Haskell-cafe] ANN: quickcheck-properties

2011-05-31 Thread Mario Blažević

On 11-05-30 05:05 AM, Alexey Khudyakov wrote:

On 30.05.2011 12:26, Bas van Dijk wrote:

On 30 May 2011 00:14, Alexey Khudyakov <alexey.sklad...@gmail.com> wrote:

It always puzzled me why there are no packages for for testing general
type classes laws. (Monoid laws, monad laws etc). It looks like ideal
case for quickcheck and smallcheck.


How about 'checkers' by Conal Elliott:
http://hackage.haskell.org/package/checkers


We really need better search on hackage than C-f in browser. I didn't
find them. Thank you for pointers.


	When I needed the very same thing a few months ago, I discovered 
checkers by using the reverse dependencies list for QuickCheck:



http://bifunctor.homelinux.net/~roel/cgi-bin/hackage-scripts/revdeps/QuickCheck-2.4.1.1#direct

	That helped a lot, though finding checkers in the list still wasn't a 
breeze.




Re: [Haskell-cafe] Policy for taking over a package on Hackage

2011-05-27 Thread Mario Blažević
On 11-05-25 08:52 AM, Johan Tibell wrote:
On Wed, May 25, 2011 at 2:01 PM, Ivan Lazar Miljenovic <ivan.miljeno...@gmail.com> wrote:
 With my wl-pprint-text package, Jason Dagit suggested to me on
 #haskell that it would make sense to make such a pretty-printer be
 class-based so that the same API could be used for String, ByteString,
 Text, etc.
 I'm a bit skeptical of using type classes to abstract over Unicode
 string types and byte sequence types. The only API shared by the two
kind of types is that of a sequence. Things like dot, spaces, etc.
 don't make much sense on binary data. You must assume that the
 ByteString contains text in some encoding to make sense of such
 concepts.

You don't necessarily need spaces and dot to abstract over
bytestrings and Unicode string types. They both implement IsString and
Monoid classes, as well as operations null, length, isPrefixOf, take,
and drop.

That's sufficient to implement a generic parser that works on any
input type supporting these operations (see [1] for example), though
there is some performance cost. I don't see why a pretty-printer
should be any more difficult.

[1] http://hackage.haskell.org/package/incremental-parser
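	To make the claim concrete, here is a self-contained sketch; the
InputChunk class is hypothetical, invented for illustration rather than
taken from incremental-parser:

{-# LANGUAGE FlexibleInstances #-}

import Data.Monoid (Monoid)
import qualified Data.Text as T

-- Just enough shared structure to write generic consumers of String,
-- Text, or ByteString input.
class Monoid s => InputChunk s where
  nullC :: s -> Bool
  splitAtC :: Int -> s -> (s, s)

instance InputChunk [Char] where
  nullC = null
  splitAtC = splitAt

instance InputChunk T.Text where
  nullC = T.null
  splitAtC = T.splitAt

-- Works uniformly for any instance, in the spirit of the generic parser.
chunksOf :: InputChunk s => Int -> s -> [s]
chunksOf n s | nullC s = []
             | otherwise = let (h, t) = splitAtC n s in h : chunksOf n t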



Re: [Haskell-cafe] ANNOUNCE: iterIO-0.1 - iteratee-based IO with pipe operators

2011-05-06 Thread Mario Blažević

On 11-05-06 11:15 AM, Alex Mason wrote:

Hi All,

I really love the look of this package, but if this is going to be *the* iteratee
package, I would absolutely love to see it fix some of the biggest mistakes in
the other iteratee packages, specifically naming. A change in naming for the
terms iteratee, enumerator and enumeratee would go a hell of a long way here;
Peaker on #haskell suggested Consumer/Producer/Transformer, and there is a lot
of agreement in the channel that these are vastly better names. They’re also
far less intimidating to users.

I personally feel that maybe Transformer isn't such a great name (being closely 
associated with monad transformers), and that maybe something like Mapper would 
be better, but I'm by no means in love with that name either. More people in 
#haskell seem to like Transformer, and I don't think my argument against it is 
very strong, so the hivemind seems to have settled on the 
Producer/Transformer/Consumer trilogy.

I'd love to hear thoughts on the issue, especially from David.


The Producer/Consumer terminology, if I'm not mistaken, is usually 
applied to coroutine pairs. I use these terms myself in the SCC package, 
together with terms Transducer and Splitter. The former term is also 
well established, the latter was my own.


Though I like and use this terminology, I'm not sure it's a good 
fit for the existing Enumerator/Iteratee pairs, which are not real 
symmetric coroutines. Enumerators are more like the Python (2.5) 
Generators. I don't know what the Python terminology would be for the 
Iteratee.



On 11-05-06 12:47 PM, dm-list-haskell-c...@scs.stanford.edu wrote:

This is a question I struggled a lot with.  I definitely agree that
the terms are pretty intimidating to new users.

At least one thing I've concluded is that it really should be
presented as two concepts, rather than three.  So we should talk
about, e.g., producers, consumers, and pipeline stages that do both.

I'd been thinking about using the terms Source and Sink, but Source is
very overloaded, and SinkSource doesn't exactly roll off the tongue
or evoke a particularly helpful intuition.


The SCC package happens to use the Source and Sink names as well. They
are used not for coroutines directly, but instead for references to
coroutines of the appropriate type. Every consumer thus owns a Source
from which it fetches its input, and that Source is always bound to
another coroutine that yields those values through a Sink. Source and
Sink are passive handles to a Producer and a Consumer. I may be
subjective, but I find this use of the terms very fitting.






Re: [Haskell-cafe] ANNOUNCE: iterIO-0.1 - iteratee-based IO with pipe operators

2011-05-06 Thread Mario Blažević

On 11-05-06 09:58 PM, dm-list-haskell-c...@scs.stanford.edu wrote:

At Fri, 06 May 2011 21:27:21 -0400,
Mario Blažević wrote:

I'd been thinking about using the terms Source and Sink, but Source is
very overloaded, and SinkSource doesn't exactly roll off the tongue
or evoke a particularly helpful intuition.

  The SCC package happens to use Source and Sink names as well. They
are used not for coroutines directly, but instead for references to
coroutines of the appropriate type. Every consumer thus owns a Source
from which it fetches its input, and that Source is always bound to
another coroutine that yields those values through a Sink. Source and
Sink are a passive handle to a Producer and Consumer. I may be
subjective, but I find this use of the terms very fitting.

You mean fitting for references to coroutines, or fitting for the
replacement names for Enumerator/Iteratee?


The former, unfortunately. As I said, the most usual name for the 
Enumerator concept would be Generator. That term is already used in 
several languages to signify this kind of restricted coroutine. I'm not 
aware of any good alternative naming for Iteratee.




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ArrowLoop and streamprocessors

2011-03-31 Thread Mario Blažević

On 11-03-30 05:29 PM, Mathijs Kwik wrote:

Hi all,

I'm playing around a bit with arrows (more specifically, something
like a CPS style streamprocessor as described in Generalising Monads
to Arrows by John Hughes).


	I had struggled with the same problem a year ago, and I concluded it 
was hopeless. See


http://www.haskell.org/pipermail/haskell-cafe/2010-January/072193.html

	The only stream processors that conform to the Arrow interface seem to 
be the FRP-like ones, where each stream item comes with a time stamp. 
I'm not entirely convinced that Arrow is the best abstraction even in 
that case.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: version 0.7 of monad-parallel, monad-coroutine and SCC

2011-03-26 Thread Mario Blažević

On 11-03-26 12:27 AM, Steven Shaw wrote:

Hi Mario,

I wondered if you had an application in mind for your incremental 
parser library in Haskell? A little while ago I was following the 
development of an open source text editor for Mac OS X called 
Kod[.app]. They were wanting an incremental parser to help with 
correct+fast syntax highlighting and the like. Looks like they decided 
on gazelle written in C. I though you might find it of interest.


https://groups.google.com/forum/#!msg/gazelle-users/RfE-lSmqb7c/vrqdPaOIoMwJ 

http://www.reverberate.org/gazelle/

Is that the kind of thing you had in mind for your incremental parser 
library in Haskell?


The application I had in mind for the incremental parser is already 
using it, it's the new version of the streaming component combinators 
framework. More generally, though, its main purpose is the efficient 
communication between coroutine-like things like enumerators and 
iteratees. They tend to produce and consume data in chunks, but the 
producer's idea of proper chunk boundaries often doesn't match the 
consumer's. So instead of fetching raw chunks, the consumer uses the 
incremental parser to abstract the producer's boundaries away.
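
	As a toy illustration of the boundary mismatch (this is not the 
incremental-parser API, just the general idea), a consumer of complete 
lines has to carry unfinished input over from one producer chunk to the 
next:

 -- returns the complete lines found so far and the pending remainder
 splitChunk :: String -> String -> ([String], String)
 splitChunk pending chunk = go (pending ++ chunk)
    where go s = case break (== '\n') s of
                   (line, _:rest) -> let (ls, p) = go rest
                                     in (line : ls, p)
                   (_, [])        -> ([], s)

Feeding it the chunks "foo" and "bar\nbaz" yields the complete line 
"foobar" only once the second chunk arrives, with "baz" left pending.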


I did not have text editors or parsing big languages in mind when 
I wrote the library, but I suppose it could be used there as well. Among 
my vague plans were to provide a bridge for the enumerator and coroutine 
libraries, and to write a proper incremental XML parser.


Thanks for the links, I wasn't aware of these projects.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: enumerator 0.4.8

2011-03-26 Thread Mario Blažević

On 11-03-26 04:33 PM, John A. De Goes wrote:


I noticed this problem some time ago. Beyond just breaking monadic 
associativity, there are many other issues with standard definitions 
of iteratees:


1. It does not make sense in general to bind with an iteratee that
has already consumed input, but there's no type-level difference
between a virgin iteratee and one that has already consumed input;

2. Error recovery is ill-defined because errors do not describe
what portion of the input they have already consumed;

3. Iteratees sometimes need to manage resources, but they're not
designed to do so which leads to hideous workarounds;

4. Iteratees cannot incrementally produce output, it's all or
nothing, which makes them terrible for many real world problems
that require both incremental input and incremental output.


Overall, I regard iteratees as only a partial success. They're leaky 
and somewhat unsafe abstractions.


Out of curiosity, have you looked at the monad-coroutine library? 
It's a more generic and IMO much cleaner model, though I wouldn't 
recommend it as a replacement because the enumerator and iteratee 
libraries come with more predefined plumbing. I think your point #1 
still stands, but others can all be made to disappear - as long as you 
define your suspension functors properly.



I'm experimenting with Mealy machines because I think they have more 
long-term promise to solve the problems of iteratees.


Do you mean a sort of a transducer monad transformer or an actual 
finite state machine? The latter would seem rather restrictive.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANNOUNCE: version 0.7 of monad-parallel, monad-coroutine and SCC

2011-03-25 Thread Mario Blažević
Packages monad-parallel [1], monad-coroutine [2] and SCC [3] have been 
upgraded on Hackage to version 0.7.


The monad-parallel library defines two Monad subclasses, 
MonadParallel and MonadFork, that enable some monadic computations to be 
executed in parallel and their results combined. The library also 
exports a subset of the Control.Monad interface (ap, sequence, and 
related functions), adjusted to exploit the parallelism. The new version 
of the library adds several more instances of these classes.
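
	A small usage sketch (the module name is the package's 
Control.Monad.Parallel; compile with -threaded and run with +RTS -N2):

 import qualified Control.Monad.Parallel as MP

 expensive :: Integer -> IO Integer
 expensive n = return $! sum [1..n]

 main :: IO ()
 main = MP.liftM2 (+) (expensive 10000000) (expensive 20000000) >>= print

Here MP.liftM2 evaluates its two monadic arguments in parallel before 
combining their results, unlike Control.Monad.liftM2.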


The monad-coroutine library exports a generic monad transformer 
Coroutine: Functor s => MonadTrans (Coroutine s). A 
Coroutine-transformed monad can suspend at any point, returning its 
resumption wrapped in the functor s.


The new release of monad-coroutine makes Coroutine an instance of 
Functor and Applicative classes. It also adds support for 
inter-coroutine communication through chunks parsed by the 
incremental-parser library [4].
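
	A minimal example of the transformer in use, driving a 
Yield-suspending coroutine by hand with resume (module names as in the 
monad-coroutine package):

 import Control.Monad.Coroutine
 import Control.Monad.Coroutine.SuspensionFunctors (Yield(..), yield)

 producer :: Coroutine (Yield Int) IO ()
 producer = mapM_ yield [1, 2, 3]

 main :: IO ()
 main = loop producer
    where loop c = resume c
                   >>= either (\(Yield n c') -> print n >> loop c')
                              return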


The new version of Streaming Component Combinators (a.k.a. SCC) 
relies on the aforementioned incremental-parser support to some good 
effect. Many coroutine components have been simplified and sped up by 
~30% on average.


[1] http://hackage.haskell.org/package/monad-parallel-0.7
[2] http://hackage.haskell.org/package/monad-coroutine-0.7
[3] http://hackage.haskell.org/package/scc-0.7
[4] http://hackage.haskell.org/package/incremental-parser-0.1


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Mime / Mail library

2011-03-22 Thread Mario Blažević
On Sun, Mar 20, 2011 at 10:50 AM, Christopher Done chrisd...@googlemail.com
 wrote:

 On 20 March 2011 15:05, Pieter Laeremans pie...@laeremans.org wrote:

 Hi all,

 The MIME package that can be found on hackage uses String as input.
 Would it be considered better if there were a version based on Text or
 ByteString?


 I think the solution to this problem is a generic `string' package which
 just provides a few classes. The MIME library would export an interface that
 only deals with instances of these classes, and whether you're using Text,
 String, ByteString/Lazy/Char8, ropes, whatever, it's not the library
 writer's concern or assumptions to make.

 We already have:
 http://hackage.haskell.org/packages/archive/string-combinators/0.6/doc/html/Data-String-Combinators.html
 Which works on Monoid and IsString, but there needs to be a class like can
 be read/outputted via IO and one for read/show/serialize, both of which are
 important for speed.



One possible extension to the Data-String-Combinators approach can be found
in my new incremental-parser package (
http://hackage.haskell.org/package/incremental-parser-0.1), which I should
soon announce. It relies on three Data.Monoid subclasses, all plain Haskell
98, that allow monoids to be decomposed and parsed. There are instances for
lists, ByteString, and Text. It's a different approach from ListLike,
because it abstracts away the Char type.

Like with any abstraction, though, there is performance cost for some
operations. It could be minimized by adding more defaulted methods to the
FactorialMonoid class, but for the first release I concentrated on what the
parser needed.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANNOUNCE: incremental-parser library package

2011-03-22 Thread Mario Blažević
The first version of incremental-parser has been released on Hackage
[1]. It's yet another parser combinator
library, providing the usual set of Applicative and Monad combinators. Apart
from this, it has three twists that make it
unique.

First, the parser is incremental. That means it can be fed its input in
chunks, and in proper circumstances it can
also provide the parsed output in chunks. For this to be possible the result
type must be a Monoid. The complete parsing
result is then a concatenation of the partial results.

In order to make the incremental parsing easier, the combinator set is
optimized for monoidal results. The usual
combinator many1, for example, assumes the result type is a monoid and
concatenates its components instead of
constructing a list.

In Parsec:
 many1 :: Stream s m t => ParsecT s u m a -> ParsecT s u m [a]

In incremental-parser:
 many1 :: (Monoid s, Monoid r) => Parser s r -> Parser s r


The second weirdness is that the parser is generic in its input
stream type, but this type is parameterized in a
holistic way. There is no separate token type. Primitive parsers that need
to peek into the input require its type to be
an instance of a monoid subclass.

In Parsec:
 string :: Stream s m Char => String -> ParsecT s u m String
 char :: Stream s m Char => Char -> ParsecT s u m Char
 anyToken :: (Stream s m t, Show t) => ParsecT s u m t

In Attoparsec:
 string :: ByteString -> Parser ByteString
 word8 :: Word8 -> Parser Word8
 anyWord8 :: Parser Word8

In incremental-parser:
 string :: (LeftCancellativeMonoid s, MonoidNull s) => s -> Parser s s
 token :: (Eq s, FactorialMonoid s) => s -> Parser s s
 anyToken :: FactorialMonoid s => Parser s s

The monoid subclasses referenced above provide methods for analyzing and
subdividing the input stream. The classes
are not particularly demanding, and any reasonable input stream should be
able to accommodate them easily. The library
comes with instances for lists, ByteString, and Text.

 class Monoid m => MonoidNull m where
    mnull :: m -> Bool

 class Monoid m => LeftCancellativeMonoid m where
    mstripPrefix :: m -> m -> Maybe m

 class Monoid m => FactorialMonoid m where
    factors :: m -> [m]
    primePrefix :: m -> m
...
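
To give an idea of how undemanding these classes are, here is roughly what
the list instances can look like (a sketch based on the methods quoted
above, not the package source):

 import Data.List (stripPrefix)

 instance MonoidNull [x] where
    mnull = null

 instance Eq x => LeftCancellativeMonoid [x] where
    mstripPrefix = stripPrefix

 instance FactorialMonoid [x] where
    factors = map (: [])
    primePrefix = take 1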


Finally, the library being implemented on the basis of Brzozowski
derivatives, it can provide both the symmetric and
the left-biased choice, <|> and <<|>. This is the same design choice made by
Text.ParserCombinators.ReadP and
uu-parsinglib. Parsec and its progeny on the other hand provide only the
faster left-biased choice, at some cost to the
expressiveness of the combinator language.

[1] http://hackage.haskell.org/package/incremental-parser-0.1
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: incremental-parser library package

2011-03-22 Thread Mario Blažević
 This seems very interesting. One question:

  The MonadPlus and the Alternative instance differ: the former's mplus
  combinator equals the asymmetric <<|> choice.

 Why?



Good question. Basically, I see MonadPlus as a union of Monad and
Alternative. The class should not exist at all. But as long as it does, I
figured I should provide an instance, and I made it different from the
Monoid+Alternative combination because otherwise it would be useless. My
second choice would be to remove the instance completely.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] ANNOUNCE: incremental-parser library package

2011-03-22 Thread Mario Blažević
2011/3/22 Philippa Cowderoy postmas...@flippac.org

 This is what newtypes are for, no?



I did not think of that approach. I'm not sure how well it would work out,
but it would solve another problem I have, which is the duplication of
combinators many, some, and optional. Each of these could exist in two
forms, the lazy one and the greedy one, and the only difference is the
underlying choice combinator, (<|>) vs. (<<|>).

I'm not aware of any other parsing library taking this road, though, and
there must be a good reason. I'll try and see.



 2011/3/22 Mario Blažević mblaze...@stilo.com


  This seems very interesting. One question:

  The MonadPlus and the Alternative instance differ: the former's mplus
  combinator equals the asymmetric <<|> choice.

 Why?



 Good question. Basically, I see MonadPlus as a union of Monad and
 Alternative. The class should not exist at all. But as long as it does, I
 figured I should provide an instance, and I made it different from the
 Monoid+Alternative combination because otherwise it would be useless. My
 second choice would be to remove the instance completely.


 I have to admit I really do not like having Applicative and MonadPlus with
 different behavior. Yes, one is redundant, but that is more an artifact of
 language evolution, than an intentional opportunity for diverging behavior.

 Every library I am aware of to date, save of course this one, has
 maintained their compatibility.

 If the instance for Alternative satisfies the underspecified MonadPlus
 laws, I'd just as soon have the 'useless redundant' instance. The power of
 MonadPlus is in the combinators that are built on top of it. Not in the
 primitives themselves.

 If the Alternative instance would not be a legal MonadPlus instance, then
 I'd feel much less queasy with your second scenario, and it simply removed.

 -Edward

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] trac.haskell.org problem

2011-03-18 Thread Mario Blažević
I've tried to send the verification e-mail from trac.haskell.org to three
different e-mail accounts in the last 24 hours. None arrived.

Note that I'm not trying to create a new account, I'm trying to verify my
e-mail address in order to edit the Wiki pages of my projects.

Can anybody else confirm this problem?



 Ah, I suppose that's to do with the trac server itself, that's beyond me.

 Moving to café: Does anyone know what's up with trac.haskell.org not
 sending
 out verification emails?

 Cheers,

 /Niklas

 
 From: Jurriaan Hage [jur at cs.uu.nl]
 Sent: Monday, February 28, 2011 8:41 AM
 To: Niklas Broberg
 Subject: Re: Again a question about haskell-src-exts.

 On Feb 28, 2011, at 8:39 AM, Niklas Broberg wrote:

  I suppose I do - to the extent I can. What's the issue? :-)
 The system does not send out verification e-mails, so I can not really
 register completely and submit new issues.
 I have tried it with two different e-mail accounts, and somebody else has
 tried too.
 They also do not show up in spam folders.

 Jur

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Missing e-mail @haskell-cafe

2011-03-18 Thread Mario Blažević
I've sent an e-mail to Haskell Café this morning about my troubles 
with Trac confirmation e-mails. The e-mail must have reached the server, 
because it showed up in the mailing list archive:


http://www.haskell.org/pipermail/haskell-cafe/2011-March/090311.html

This other archive, however, doesn't list it:

http://www.mail-archive.com/haskell-cafe@haskell.org/

I thereby infer that the message was never forwarded to the mailing 
list. Is that correct, and does anybody have a clue why?



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] (Co/Contra)Functor and Comonad

2010-12-24 Thread Mario Blažević
On Fri, Dec 24, 2010 at 5:36 AM, Edward Kmett ekm...@gmail.com wrote:

 +1 for adding a Contrafunctor/ContraFunctor to base somewhere. But I agree
 completely with Tony, please call it contramap. ;) Otherwise people will
 wonder why comonads are not cofunctors -- a matter which can be cleared up
 by avoiding sloppy terminology.

 +1 for adding Comonads. As an aside, since Haskell doesn't have (nor could
 it have) coexponential objects, there is no 'missing' Coapplicative concept
 that goes with it, so there can be no objection on the grounds of lack of
 symmetry even if the Functor = Applicative = Monad proposal goes through.

 I have been meaning to split off a 'comonads' package from category-extras
 for a while, in a way that avoids requiring tons of crazy machinery. I have
 a candidate that I just need to polish up a bit and throw on hackage --
 perhaps that could serve as a straw man proposal?



Yes, please. The interface of Comonad is big enough to require some proper
design, and an exclusive comonads package would be a good place for refining
it.

The same arguments can be made for ContraFunctor, though in this case the
only open questions are only the naming of the module, class and its single
method, and what instances to declare inside the module.




 -Edward


 On Fri, Dec 24, 2010 at 4:51 AM, Stephen Tetley 
 stephen.tet...@gmail.com wrote:

 On 24 December 2010 02:16, Mario Blažević mblaze...@stilo.com wrote:

  To turn the proof obligation around, what could possibly be the downside
 of
  adding a puny Cofunctor class to the base library?

 Hi Mario

 For the record I'm personally neutral on Cofunctor and on balance
 would like to see Comonad added to Base.

 My reservation is really at the meta-level - I suspect there are a
 lot of candidates for adding to Base if you want to Base to be
 systematic about modeling structures. At the moment and possibly by
 accident rather than explicit intention, the structures in Base
 (Monoid, Applicative, Monad, Arrow) add good sets of operational
 combinators as well as modeling structures (in Monoid's case it only
 adds one operational combinator but it is the basis for Foldable, the
 Writer Monad and more).

 For Comonad, Cofunctor (Bifunctor, Semigroup...) not having the
 visibility of being in Base certainly means there is less motivation
 to discover valuable operations that use them, but should they go into
 Base without an initial strong operational value, instead maybe
 something between Base and Hackage is needed?

 Certainly, Hackage isn't great for developing Base candidates. The
 bike shedding on the Libraries list, whilst frustrating for a
 proposer, is valuable for teasing out more regular designs than single
 authored packages often manage, and having lots of small packages for
 Base-like things is a dependency burden that hinders adoption.

 Best wishes

 Stephen

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] (Co/Contra)Functor and Comonad

2010-12-24 Thread Mario Blažević
On Fri, Dec 24, 2010 at 7:43 AM, Maciej Piechotka uzytkown...@gmail.com wrote:

 On Fri, 2010-12-24 at 05:36 -0500, Edward Kmett wrote:
 
  +1 for adding Comonads. As an aside, since Haskell doesn't have (nor
  could it have) coexponential objects, there is no 'missing'
  Coapplicative concept that goes with it, so there can be no objection
  on the grounds of lack of symmetry even if the Functor = Applicative
  = Monad proposal goes through.

 There is still potentially useful Copointed/CoPointed:

 class [Functor f =>] CoPointed f where
    copoint :: f a -> a



Why should Copointed, or Pointed for that matter, be a subclass of Functor?
I don't see the point of arranging all possible classes into a single
complete hierarchy. These single-method classes can stand on their own. Once
you have them, it's easy to declare

 class (Functor f, Pointed f) => Applicative f

and also

 class (Foldable f, Pointed f) => Sequence f

or whatever.
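
For example, with the classes standing on their own, the instances are
one-liners (an illustrative sketch, using CoPointed without the Functor
superclass):

 newtype Identity a = Identity { runIdentity :: a }

 instance CoPointed Identity where
    copoint = runIdentity

 data NonEmpty a = a :| [a]

 instance CoPointed NonEmpty where
    copoint (x :| _) = x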


On Fri, Dec 24, 2010 at 4:51 AM, Stephen Tetley stephen.tet...@gmail.com wrote:

 On 24 December 2010 02:16, Mario Blažević mblaze...@stilo.com wrote:

  To turn the proof obligation around, what could possibly be the downside
 of
  adding a puny Cofunctor class to the base library?

 Hi Mario

 For the record I'm personally neutral on Cofunctor and on balance
 would like to see Comonad added to Base.

 My reservation is really at the meta-level - I suspect there are a
 lot of candidates for adding to Base if you want to Base to be
 systematic about modeling structures.



There is a limited number of methods with up to N unconstrained arguments;
combinatorics takes care of that.

class Foo (x :: *) where
 method1 :: x                          -- default, mempty, minBound, maxBound
 method2 :: x -> x                     -- succ, pred, negate
 method3 :: x -> x -> x                -- mappend
 method4 :: (x -> x) -> x              -- fix

class Cons (c :: * -> *) where
 method1 :: x -> c x                   -- return, pure
 method2 :: c x -> x                   -- extract
 method3 :: c (c x) -> c x             -- join
 method4 :: c x -> c (c x)             -- duplicate
 method5 :: c (c x) -> x
 method6 :: x -> c (c x)
 method7 :: x -> c x -> c x
 method8 :: c x -> c x -> x
 method9 :: (x -> x) -> c x -> c x
 method10 :: (x -> y) -> c x -> c y    -- fmap
 method11 :: (x -> y) -> c y -> c x    -- contramap
 method12 :: x -> c y -> c y
 method13 :: x -> c y -> c x
 method14 :: c x -> c y -> x
 method15 :: c x -> (x -> c x) -> c x
 method16 :: c x -> (x -> c y) -> c y  -- >>=
 method17 :: c x -> (c x -> x) -> c x
 method18 :: c x -> (c x -> y) -> c y  -- extend


I may have left something out, but all types above should be inhabited. I
have omitted methods on constructors that can be defined on a plain type,
such as mplus :: m a -> m a -> m a, which is a restriction of the type of
mappend.

If one were to explore the design space systematically with no backward
compatibility baggage, the best approach might be:

- declare each method in a class of its own, with no laws whatsoever,
- never declare two methods in a same class,
- combine the primitive classes into bigger classes,
- restrict the bigger classes with laws.

The Pointed and Copointed classes above are two examples.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] (Co/Contra)Functor and Comonad

2010-12-23 Thread Mario Blažević

Why are Cofunctor and Comonad classes not a part of the base library?

I recently defined a data type (Control.Cofunctor.Ticker in 
monad-coroutine on Hackage) that happens to be a co-functor, or 
contra-functor if you prefer. In other words, it can implement the 
following function:


 cofmap :: (a -> b) -> cf b -> cf a

I wanted to define a proper instance for it, but to my surprise I 
discovered that I couldn't. Not only is the class not defined in base, 
the only package I could find on Hackage that defines it is 
category-extras. This is a huge package I'd rather not have as a 
dependency, so I opted not to declare any instance.


Later on I found that this question has been raised before by Conal 
Elliott, nearly four years ago.


http://www.haskell.org/pipermail/libraries/2007-January/006740.html

The result was the TypeCompose package, which presents a decent 
solution. I still can't think of any harm in having a proper class 
declaration in the base library: if you don't need it, you don't need to 
know it. But without the class declaration in the base library (or in 
some other obvious package) every other released library must either 
lack the instance declarations or declare the class itself and risk clashes.


This is not some obscure class you've never encountered, by the way: any 
consumer of data is a cofunctor. All regexp data types, for example, 
are cofunctor instances - or would be if there was a class to declare 
them instances of.
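
To make that claim concrete, here is the class implied by the cofmap
signature above, together with the simplest consumer instance I can think
of:

 class Cofunctor cf where
    cofmap :: (a -> b) -> cf b -> cf a

 newtype Predicate a = Predicate { test :: a -> Bool }

 instance Cofunctor Predicate where
    cofmap f (Predicate p) = Predicate (p . f)

A Predicate String can be turned into a Predicate Int by supplying a
function of type Int -> String, never the other way around.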


The same question extends to Comonad. I am not too familiar with this 
area so there may be fewer potential instances of the class, but I 
remember there was quite a bit of buzz around comonads a few years ago. 
Judging by a cursory Hayoo search, if anything the situation is worse 
than with Cofunctor: there are multiple incompatible declarations of the 
class scattered in various libraries that need it: ad, category-extras, 
data-category, rope. There is also evidence of undeclared Comonad 
instances: see the ListZipper and value-supply libraries.


So, is it time to add these two classes to the base library?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] (Co/Contra)Functor and Comonad

2010-12-23 Thread Mario Blažević
On Thu, Dec 23, 2010 at 5:25 PM, Stephen Tetley stephen.tet...@gmail.com wrote:

 On 23 December 2010 21:43, Mario Blažević mblaze...@stilo.com wrote:
  Why are Cofunctor and Comonad classes not a part of the base library?
 [SNIP]
  Later on I found that this question has been raised before by Conal
 Elliott,
  nearly four years ago.
 
  http://www.haskell.org/pipermail/libraries/2007-January/006740.html


 From a somewhat philistine perspective, that Conal's question went
 unanswered says something:

 Does anyone have useful functionality to go into a Cofunctor module
 (beyond the class declaration)?

 Successful post-H98 additions to Base (Applicative, Arrows, ...)
 brought a compelling programming style with them. For Comonads,
 Category-extras does define some extra combinators but otherwise they
 have perhaps seemed uncompelling.



There are plenty of potential Cofunctor instances on Hackage, as I've
pointed out. The other side of the proof of the utility of the class would
be to find existing libraries that could be parameterized by an arbitrary
functor: in other words, some examples in Hackage of

 class Cofunctor c => ...
 instance Cofunctor c => ...
 f :: Cofunctor c => ...

This would be rather difficult to prove - such signatures cannot be declared
today, and deciding if existing declarations could be generalized in this
way would require a pretty deep analysis. The only thing I can say is build
it and they will come.

To turn the proof obligation around, what could possibly be the downside of
adding a puny Cofunctor class to the base library?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] (Co/Contra)Functor and Comonad

2010-12-23 Thread Mario Blažević
On Thu, Dec 23, 2010 at 11:25 PM, Tony Morris tonymor...@gmail.com wrote:


 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 ...regardless of the utility of a contravariant functor type-class, I
 strongly advocate for calling it Contrafunctor and not Cofunctor. I
 have seen numerous examples of confusion over this, particularly in
 other languages.



I don't personally care what it's called, as long as it's available. Can
anybody point to an authoritative source for the terminology, though?
Wikipedia claims that cofunctor is a contravariant functor.

Also, is there anything in category theory equivalent to the Functor ->
Applicative -> Monad hierarchy, but with a Cofunctor/Contrafunctor at the
base? I'm just curious, I'm not advocating adding the entire hierarchy to
the base library. ;)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Concurrent with monads

2010-11-15 Thread Mario Blažević

On 10-11-14 07:36 PM, Jiansen He wrote:

Hi cafe,

I wonder if it is possible to tell a Haskell system that two
computations with side effects could be executed concurrently.

Here is an affected example:

Suppose two people want to compare their age, but do not want to leak
their personal information.  The following program reads one person's
age after another, then sends back the compared results.

age :: IO ()
age = do
   i <- readIntFrom "a"
   j <- readIntFrom "b"
   writeTo "a" (i - j)
   writeTo "b" (j - i)

How can I express the fact that two readings could be carried out in any
order?


You can use the monad-parallel package. Your example could be written as

 import qualified Control.Monad.Parallel as MP

 age :: IO ()
 age = do
   (i, j) <- MP.liftM2 (,) (readIntFrom "a") (readIntFrom "b")
   writeTo "a" (i - j)
   writeTo "b" (j - i)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANNOUNCE: version 0.6 of monad-coroutine and SCC

2010-11-04 Thread Mario Blažević
Packages monad-coroutine and SCC have been upgraded to version 0.6 on
Hackage.

The monad-coroutine package exports a generic monad transformer
Coroutine: Functor s => MonadTrans (Coroutine s). A
Coroutine-transformed monad can suspend at any point, returning its
resumption wrapped in the functor s. There are also
some functions for manipulating and running coroutines, as well as a couple
of useful suspension functors such as Yield
and Await. Version 0.6 of monad-coroutine makes running pairs of coroutines
simpler by adding several predefined
resolvers.

The new version of Streaming Component Combinators (a.k.a. SCC) is
mostly about performance enhancements. The
coroutines now communicate through chunks of data rather than individual
items, which reduces the coroutine switching
overhead. The module hierarchy has been reorganized for easier use, but the
exported type and function names have mostly remained the same.

Both packages have been tested with GHC 6.12.3 and 7.0.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANNOUNCE: coroutine-enumerator

2010-11-04 Thread Mario Blažević
The newly released coroutine-enumerator package can be used as a bridge
between the enumerator and monad-coroutine packages. It provides two-way
conversion functions between an Iteratee and an Await-suspending coroutine,
and also between an Enumerator and a Yield-suspending coroutine.

As a little example, the following program combines the http-enumerator,
monad-coroutine, and SCC packages using the coroutine-enumerator package to
print out all lines from the Hackage database containing substring
enumerator:

 import Control.Exception.Base (SomeException)
 import Control.Monad.Trans.Class (lift)

 import Data.ByteString (ByteString)
 import Data.Text.Encoding (decodeUtf8)

 import Network.HTTP.Enumerator

 import Control.Monad.Coroutine
 import Control.Monad.Coroutine.SuspensionFunctors
 import Control.Monad.Coroutine.Nested
 import Control.Monad.Coroutine.Enumerator

 import Control.Concurrent.SCC.Sequential

 main = httpRedirect (\_ _ -> coroutineIteratee consumer) =<< parseUrl address

 address = "http://hackage.haskell.org/packages/archive/pkg-list.html"

 consumer :: Coroutine (Await [ByteString]) IO
              (Either SomeException ((), [ByteString]))
 consumer = pipe translator (consume worker) >> return (Right ((), []))

 translator :: Functor f => Sink IO (EitherFunctor (Await [a]) f) a
            -> Coroutine (EitherFunctor (Await [a]) f) IO ()
 translator sink = do chunks <- liftParent await
                      if null chunks
                         then lift (putStrLn "END")
                         else putList chunks sink >> translator sink

 worker :: Consumer IO ByteString ()
 worker = toChars >-> foreach (line `having` substring "enumerator")
                              (append (fromList "\n") >-> toStdOut)
                              suppress

 toChars :: Monad m => Transducer m ByteString Char
 toChars = oneToOneTransducer decodeUtf8 >-> coerce


Alternatively, the worker coroutine can parse the XML database and print out
all elements in which any attribute value contains the substring "enumerator":

 worker = toChars
          >-> parseXMLTokens
          >-> foreach (xmlElementHavingTagWith (xmlAttributeValue `having`
                                                substring "enumerator")
                       `nestedIn` xmlElementContent)
                      (coerce >-> toStdOut)
                      suppress
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Forall and type synonyms in GHC 7.0

2010-11-01 Thread Mario Blažević



I had the exact same problem in my regional-pointers package in the
withArray function:

withArray ∷ (Storable α, MonadCatchIO pr)
   ⇒ [α]
   → (∀ s. RegionalPtr α (RegionT s pr) → RegionT s pr β)
   → pr β

  I had to replace the original:

withArray vals = withArrayLen vals ∘ const

with:

withArray vals f = withArrayLen vals $ \_ → f

where:

withArrayLen ∷ (Storable α, MonadCatchIO pr)
 ⇒ [α]
 → (∀ s. Int → RegionalPtr α (RegionT s pr) → RegionT s pr β)
 → pr β

So unfortunately you have to inline the function composition:

pair2 combinator = pair1 $ \b -> combinator (chooseBinder b)



	This worked for me, thank you! I was worried I'd have to make a 
sweeping change to the module interfaces. I find this solution rather 
surprising, but as long as it's localized I don't mind.




Note that in the other thread I'm describing a similar problem in my
usb-safe package. Where in essence the problem is that the following
won't type check:

foo :: (forall s. ST s a) -> a
foo st = ($) runST st

but the following will:

foo :: (forall s. ST s a) -> a
foo st = runST st

and surprisingly the following will also type check:

foo :: (forall s. ST s a) -> a
foo st = runST $ st



	Yes, I hadn't seen that thread until this morning. The same issue with 
impredicative types appears to cause my problem and both problems you've 
encountered. I wonder what percentage of Hackage libraries will be 
affected by the change.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Forall and type synonyms in GHC 7.0

2010-10-31 Thread Mario Blažević
Before uploading a new version of my project on Hackage, I decided to
future-proof it against GHC 7.0. I ran into several compile errors caused by
the changes in let generalization, but these were easy to fix by adding
extra type annotations. But then I ran into another problem that I can't fix
so easily. Here is its trimmed-down reproduction:

 {-# LANGUAGE RankNTypes #-}

 module Test where

 data Component c = Component {with :: c}

 pair1 :: (Bool -> c1 -> c2 -> c3) -> Component c1 -> Component c2 ->
          Component c3
 pair1 combinator (Component c1) (Component c2) = Component (combinator
          True c1 c2)

 type PairBinder m = forall x y r. (x -> y -> m r) -> m x -> m y -> m r

 pair2 :: Monad m => (PairBinder m -> c1 -> c2 -> c3) -> Component c1 ->
          Component c2 -> Component c3
 pair2 combinator = pair1 (combinator . chooseBinder)

 chooseBinder :: Monad m => Bool -> PairBinder m
 chooseBinder right = if right then rightBinder else leftBinder

 leftBinder :: Monad m => PairBinder m
 leftBinder f mx my = do {x <- mx; y <- my; f x y}

 rightBinder :: Monad m => PairBinder m
 rightBinder f mx my = do {y <- my; x <- mx; f x y}

   The general idea here, if you're intrigued, is that pair1 belongs to a
generic module that packages things it knows nothing about into Components.
The remaining definitions belong to a client of the generic module, and
pair2 is a specialization of pair1 to components that have something to do
with monads.

   Now this little test compiles fine with GHC 6.12.1, but GHC
7.0.0.20101029 reports the following error in the pair2 definition:

TestForall.lhs:13:42:
Couldn't match expected type `forall x y r.
  (x -> y -> m r) -> m x -> m y -> m r'
with actual type `(x -> y -> m1 r) -> m1 x -> m1 y -> m1 r'
Expected type: Bool -> PairBinder m
  Actual type: Bool -> (x -> y -> m1 r) -> m1 x -> m1 y -> m1 r
In the second argument of `(.)', namely `chooseBinder'
In the first argument of `pair1', namely
  `(combinator . chooseBinder)'

I've tried adding extra type annotations without making any progress. At
this point I'm beginning to suspect I ran into a bug in GHC 7.0, but I can't
find it in GHC Trac; the only ticket that looks similar is #4347, but that
one works for me. Is this a bug? If not, how do I make my code compile?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Continuations and coroutines

2010-06-24 Thread Mario Blažević

Yves Parès wrote:
It helps me understand better, but would you have some simple code that 
would do that?


	You can look at the definition of the coroutine monad transformer in 
the monad-coroutine package as well:


   http://hackage.haskell.org/package/monad-coroutine

The heart of the library is in the data type


newtype Coroutine s m r = Coroutine {
   resume :: m (Either (s (Coroutine s m r)) r)
}


where s is an arbitrary functor (like Yield, for example), m is an 
arbitrary monad, and r is the coroutine's final result type.
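
	The suspension functors mentioned above are perfectly ordinary 
functors; in monad-coroutine they are essentially

 data Yield x y = Yield x y          -- produce an x, then continue as y
 newtype Await x y = Await (x -> y)  -- wait for an x, then continue

 instance Functor (Yield x) where
    fmap f (Yield x y) = Yield x (f y)

 instance Functor (Await x) where
    fmap f (Await g) = Await (f . g)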



	You can also read the "Trampolined Style" and "The essence of 
multitasking" papers:


http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.45.5447&rep=rep1&type=pdf
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.92.4514&rep=rep1&type=pdf





2010/6/19 Paul Johnson p...@cogito.org.uk mailto:p...@cogito.org.uk

On 19/06/10 10:36, Yves Parès wrote:

Hello,

I saw on the haskell wikibook that coroutines could be
implemented by using continuations:

http://en.wikibooks.org/wiki/Haskell/Continuation_passing_style#Example:_coroutines
(unhappily, the section is empty)
Since I'm actually learning the wonders of continuations, I just
wonder: how?
 



Coroutines depend on the ability to suspend and resume execution.  A
continuation acts as the resume point in the current function.
 The callCC function in the continuation monad takes a function
that expects the continuation as an argument (which is how you get
access to it).  So you say something like:

   yield = callCC $ \continuation -> ...

Then you would typically store the continuation somewhere and call
some other previously stored continuation to switch contexts.

Continuations can be used to pass data back into the continuation:
you call the continuation with an argument, and that argument
becomes the return value of the callCC.  In this case you probably
just want to use ().

You typically have a queue for continuations, so the new
continuation goes on the back of the queue and then you call the
head of the queue.  Obvious modifications for priority, simulated
time, real time or whatever else you are trying to schedule.  This
implies some kind of monadic state to store the queue in, so you
will probably make your monad of type ContT (State Queue)
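
A rough sketch of that scheme (illustrative names only, and thread
termination is glossed over):

 import Control.Monad.Cont
 import Control.Monad.State

 newtype Queue = Queue [ContT () (State Queue) ()]
 type Thread = ContT () (State Queue)

 yield :: Thread ()
 yield = callCC $ \k ->
    do lift (modify (\(Queue ts) -> Queue (ts ++ [k ()])))  -- save resume point
       next                                                 -- switch context

 next :: Thread ()
 next = do Queue ts <- lift get
           case ts of []      -> return ()
                      (t:ts') -> lift (put (Queue ts')) >> t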

If you want a thread to wait, say on a semaphore, then you have a
queue of continuations in the semaphore data structure.

Is this any help?

Paul.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org mailto:Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



--
Mario Blazevic
mblaze...@stilo.com
Stilo International

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [OT?] Haskell-inspired functions for BASH

2010-04-06 Thread Mario Blažević

Patrick LeBoutillier wrote:

...
Basically I'm looking for a bit of feedback/info:
- Does anyone know if there are already similar projects out there?


	You've already got some references, you can also look at 
http://hackage.haskell.org/package/scc which includes a shell language.




- Does anyone find this interesting?


I do.


- Any other comment/suggestion/feedback


	It would be interesting to compare the speed of your Bash functions 
with the h4sh executables written in Haskell. Your solution has the 
source platform independence on its side, of course.


	A question of my own: is there any written design (an academic paper 
would be perfect) of a functional shell language?


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: Three package announcements

2010-03-22 Thread Mario Blažević

There are three new packages on Hackage:
  - monad-parallel 0.5
(http://hackage.haskell.org/package/monad-parallel)
  - monad-coroutine 0.5
(http://hackage.haskell.org/package/monad-coroutine)
  - Streaming Component Combinators 0.5
(http://hackage.haskell.org/package/scc)

   The first two packages are completely new. Their functionality has
been present in SCC 0.4, but I thought they might be useful on their
own. I'm keeping all the version numbers in sync for now, and the source
code for all three packages is in a single Darcs repository at
http://code.haskell.org/SCC/.

   The monad-parallel library defines two Monad subclasses, 
MonadParallel and MonadFork, that enable monadic computations to be

executed in parallel and their results combined. The library also
exports a subset of the Control.Monad interface (ap, sequence, and
related functions), adjusted to exploit the parallelism. The only 
currently defined MonadParallel instances are IO, Maybe, [], and 
Identity. More  instances could be added, but I didn't want the package 
to depend on MTL or transformers. The library design was heavily 
influenced by the discussion in this Cafe thread:


http://www.mail-archive.com/haskell-cafe@haskell.org/msg68581.html

   The monad-coroutine package exports a generic monad transformer
Coroutine: Functor s => MonadTrans (Coroutine s). A 
Coroutine-transformed monad can suspend at any point, returning its

resumption wrapped in the functor s. There are also some functions for
manipulating and running coroutines, as well as a couple of useful
suspension functors such as Yield and Await.

   Finally, version 0.5 of Streaming Component Combinators (a.k.a. SCC)
comes with some significant code refactoring (as the two aforementioned
packages prove), simplifications, and performance enhancements. No new
features have been added since the 0.4 release.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GUI programming

2010-02-05 Thread Mario Blažević

Victor Nazarov wrote:

Hello,

I've been writing a GUI application with Gtk2hs. It's an
interpreter for lambda-calculus and combinatory logic, it's GPL, and if
you're interested I can share it with the cafe.

The problem is that the GUI code has become very ugly and I'm tempted
to rewrite it totally. I've been looking forward to the FRP stuff, but
I've never seen a single definition of the term. Conal Elliott's
denotational programming is too general to be a definition. I want to
try Grapefruit, but I got totally lost when I saw the arrow notation.

I'm considering a more lightweight and more imperative approach, something
closer to CSP (Communicating Sequential Processes) than FRP. I've just
crafted a sample program to illustrate my idea.

The behaviour is a monad, and an IO-based one, so you can do any IO
(Gtk2hs) programming you wish. The difference is that you don't attach
static event handlers that try to determine what to do depending on
application state. You attach and detach handlers as needed. A behaviour
looks like a process that can stop execution and wait for some GUI
event. When the event arrives, it continues execution.


	To summarize, the behaviour is a suspendable IO computation. It looks 
very much like a coroutine, in fact. I'm planning to extract the 
Control.Concurrent.Coroutine module [1] into a separate package soon. It 
implements a similar concept but generalized to transform any monad and 
any functorial suspension.


[1] 
http://hackage.haskell.org/packages/archive/scc/0.4/doc/html/Control-Concurrent-Coroutine.html




Do you see this approach as viable? There are still some details to work out:
* How to wait for several events
* How to handle IO exceptions


	I don't really know how applicable the idea is to GUI programming. 
That's not my area of expertise. I am surprised, though, that neither 
your code nor your comments seem to address the issue of concurrency, as 
I expect that would be crucial in a GUI setting. Wouldn't you need 
different behaviours to run in different threads?




Here is the code:
{-# LANGUAGE ExistentialQuantification #-}
...



	I don't see the purpose of your BBind constructor. It seems to me that 
you could simply move the first three cases of the runBehaviour 
implementation into your >>= and get rid of the constructor. Do you 
need that much laziness?




import Data.IORef
import System.Glib
import Graphics.UI.Gtk
import Control.Monad.Trans

type Event obj = IO () -> IO (ConnectId obj)

data Behaviour a =
  forall b. BBind (Behaviour b) (b -> Behaviour a)
  | BIO (IO a)
  | forall obj. GObjectClass obj => BWaitEvent (Event obj) (Behaviour a)

instance Monad Behaviour
 where action >>= generator = BBind action generator
       return a = BIO (return a)

instance MonadIO Behaviour
 where liftIO action = BIO action

runBehaviour :: Behaviour a -> IO a
runBehaviour (BBind (BWaitEvent event after) f) =
   runBehaviour (BWaitEvent event (after >>= f))
runBehaviour (BBind (BIO a) f) = a >>= \x -> runBehaviour (f x)
runBehaviour (BBind (BBind a f) g) = runBehaviour (a >>= (\x -> f x >>= g))
runBehaviour (BIO a) = a
runBehaviour (BWaitEvent event after) =
  do sigIdRef <- newIORef (error "You can't access sigIdRef before signal is connected")
     sigId <- event $
       do sigId <- readIORef sigIdRef
          signalDisconnect sigId
          runBehaviour after
          return ()
     writeIORef sigIdRef sigId
     return (error "You can't expect result from behaviour")

waitEvent :: GObjectClass obj => Event obj -> Behaviour ()
waitEvent event = BWaitEvent event (return ())

main :: IO ()
main =
  do initGUI
     window <- windowNew
     onDestroy window mainQuit
     set window [windowTitle := "Hello World"]
     button <- buttonNew
     let buttonB label =
           do liftIO $ set button [buttonLabel := label]
              waitEvent (onClicked button)
              buttonB (label ++ "*")
     runBehaviour (buttonB "*")
     set window [containerChild := button]
     widgetShowAll window
     mainGUI






___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



--
Mario Blazevic
mblaze...@stilo.com
Stilo International

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Arrow instance of Transducer (Was: [Haskell] ANN: Streaming Component Combinators 0.4)

2010-01-17 Thread Mario Blažević
 Stupid question: Is it related to Arrows?

 Not really. You might say it's more general than arrows, because 
 the streaming components are not restricted to a single input and 
 single output type. On the other hand, they are all specific to stream 
 processing, much like Fudgets and Yampa which are arrow-based.

 Arrows use tuple values for multiple inputs and outputs. Actually I'm 
 using arrows this way for signal processing.
 
 I suppose the Transducer data type [1] could be made an Arrow 
 instance, which would let me rename the 'connect' combinator [2] to 
 (). I'll have a look at Yampa to see if I can harvest more 
 combinator names, thank you!

After some investigation, I've concluded that Transducer cannot be made an
instance of Arrow in any way that's both practical and general.

The reason is that a Transducer converts a finite stream to another finite
stream of arbitrary length. If we
ignore effects, it's like a function of type [a] -> [b].

When trying to define an instance of Arrow for

 newtype Transducer a b = Transducer ([a] -> [b])

the arr and (>>>) methods are trivial:

 instance Arrow Transducer where
    arr f = Transducer (map f)
    Transducer t1 >>> Transducer t2 = Transducer (t2 . t1)

but there is no satisfactory definition for the method first. A sensible
definition, IMO, would have to give rise to a pairwise (***), so if we have two
transducers t1 and t2 which respectively convert [a,b] to [c,d] and [1,2] to
[3,4], t1 *** t2 would have to map [(a,1), (b,2)] to [(c,3), (d,4)]. There is no
general way to define such a method first for the Transducer data type above. 
One
can try appending the right-hand sides of the input to a queue and adding them 
to
the outputs of the argument transducer:

first (Transducer t) = Transducer (uncurry (zip . t) . unzip)

but this works only for injective transducers that map each new input item to
exactly one output item. If the input
stream has finite length, the input and output stream must be of equal length.
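
To see the failure concretely (assuming a field accessor runTransducer for
the newtype above), take a transducer that doubles every input item:

 double :: Transducer a a
 double = Transducer (concatMap (\x -> [x, x]))

With the zip-based definition of first,

 runTransducer (first double) [(1, 'a'), (2, 'b')]
   == zip [1, 1, 2, 2] "ab"
   == [(1, 'a'), (1, 'b')]

and half of the outputs are silently dropped, because zip truncates to the
shorter of the two streams.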

In his paper "Generalising Monads to Arrows" from 1998, John Hughes defines 
a
similar Arrow instance for stream processors operating on infinite streams which
theoretically works for non-injective transforms, but leaks infinite space and
can't be adopted to finite stream processors.

HXT defines a different kind of an Arrow instance similar to the list monad
instance. This approach can combine non-injective transforms, but in it the t1
*** t2 example above would map the input [(a,1), (b,2)] to [(c,3), (c,4), (d,3),
(d,4)]. While this is a simple and lawful Arrow instance, I don't find it useful
for stream processing.

Yampa and other Functional Reactive Programming libraries associate a unique
timestamp to each stream item. Method first can use this timestamp to associate
input and output items even if the transducer leaves out some output items, and
if the time stamps are ordered in each stream I guess it could also handle extra
output items unrelated to any input. I'm not certain to what extent the FRP
transducers (i.e., signal processors) are allowed to do this.

To summarize: while I'd love the benefits of satisfying an Arrow instance,
the loss of generality seems too high for me. SCC transducers work on finite
streams of any values, and their output stream need not have any resemblance to
the input stream. I may define an Arrow-conformant subclass of Transducer in a
future version of SCC, perhaps operating on streams with timestamped values.


 
 [1] 

http://hackage.haskell.org/packages/archive/scc/0.4/doc/html/Control-Concurrent-SCC-Types.html#t%3ATransducer

 
 [2] 

http://hackage.haskell.org/packages/archive/scc/0.4/doc/html/Control-Concurrent-SCC-Combinators.html#v%3Aconnect

 
 In whatever way it is related to arrows, maybe you can mention it on the 
 project page.

I'll copy this message there.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Arrow instance of Transducer (Was: [Haskell] ANN: Streaming Component Combinators 0.4)

2010-01-17 Thread Mario Blažević
On Sun 17/01/10  4:47 PM , Twan van Laarhoven twa...@gmail.com sent:
 The Arrow class is too big, it includes too many things and it should be
 split up. This is yet another example. You can get some of the benefits of
 standard classes by making Transducer an instance of 
 Control.Category.Category, for
 which you only need to implement (.) and id.

Yes, perhaps I'll do that. Is there any relatively accepted package that
defines PointedCategory?


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] ANN: Streaming Component Combinators 0.4

2010-01-15 Thread Mario Blažević

Henning Thielemann wrote:

Mario Blažević schrieb:

Version 0.4 of Streaming Component Combinators, or SCC for short, has
been released on Hackage. Get it at

http://hackage.haskell.org/package/scc

There isn't much new high-level functionality compared to the
previous version, but the implementation has been heavily refactored and
the foundations completely replaced.


Stupid question: Is it related to Arrows?


	Not really. You might say it's more general than arrows, because the 
streaming components are not restricted to a single input and single 
output type. On the other hand, they are all specific to stream 
processing, much like Fudgets and Yampa which are arrow-based.


	I suppose the Transducer data type [1] could be made an Arrow instance, 
which would let me rename the 'connect' combinator [2] to (>>>). I'll 
have a look at Yampa to see if I can harvest more combinator names, 
thank you!



[1] 
http://hackage.haskell.org/packages/archive/scc/0.4/doc/html/Control-Concurrent-SCC-Types.html#t%3ATransducer
[2] 
http://hackage.haskell.org/packages/archive/scc/0.4/doc/html/Control-Concurrent-SCC-Combinators.html#v%3Aconnect

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] forkSequence, runPar, parallelize

2009-12-09 Thread Mario Blažević
 
         I can't test it right now, but wouldn't the
  following do the job in the Identity monad?
 
  forkExec :: Identity a -> Identity (Identity a)
  forkExec k = let result = runIdentity k
              in result `par` return (Identity result)
 
 
 Since Identity is a newtype, would that be equivalent to result `par`
 result? The forkExec in the IO monad let's other computations keep
 going until I need the result from the forked computation.


You're right, it doesn't seem to work the way I hoped. The equivalent function 
on
Maybe monad works, though, so it is possible to write forkExec in monads other
than IO.

 In a pure computation, I can already get the same result with `par`
 and laziness, right?

Yes. The goal is to enable writing monadic parallel computations which work 
under
any parallelizable monad. For example, I'm using it to run two trampolining
producer/consumer coroutines in parallel. A large majority of interesting
coroutines I have are completely agnostic with respect to the underlying monad.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re: Re: Re: [Haskell-cafe] Re: A problem with par and modules boundaries...

2009-05-24 Thread Mario Blažević

 I recommend using -ddump-simpl, as it produces more readable output.
 
 Actually, I can't see any effect of that pragma in the
 core files whatsoever, but it certainly has effect on
 run time.
 
 How about diffing the whole core output (and using -ddump-simpl). If
 there's a performance difference then there must be a difference in the
 core code too.

Ok.

$ ghc-6.11.20090421 --make primes-test.hs -threaded -O2 -ddump-simpl > main.simpl
$ time ./primes-test +RTS -N2
4001

real    0m9.636s
user    0m18.201s
sys     0m0.088s

$ ghc-6.11.20090421 --make primes-test.hs -threaded -O2 -ddump-simpl > imported.simpl
$ time ./primes-test +RTS -N2
4001

real    0m17.547s
user    0m17.453s
sys     0m0.052s


I can't exactly use diff because the generated identifier names are not the 
same,
but after poring over it with Emacs ediff I have found only one difference that's
not attributable to identifiers:

$diff main.simpl imported.simpl
...
223c232
<    a_s1rs [ALWAYS Just L] :: GHC.Integer.Internals.Integer
---
>    a_s1sV [ALWAYS Just S] :: GHC.Integer.Internals.Integer
...


Does this S vs. L difference have anything to do with strictness and laziness?
That line is a part of the `Main.test' definition:

$ diff -C 3 main.simpl imported.simpl
*** 217,244 ****
  [Arity 2
   Str: DmdType LL]
  Main.test =
!   \ (n1_ahR :: GHC.Integer.Internals.Integer)
!     (n2_ahT :: GHC.Integer.Internals.Integer) ->
  let {
!   a_s1rs [ALWAYS Just L] :: GHC.Integer.Internals.Integer
    LclId
    [Str: DmdType]
!   a_s1rs =
  Data.List.prod1
    (Main.factors2
       (Data.List.prod1
!         (GHC.Num.up_list Main.lvl Main.lvl n1_ahR) Data.List.lvl1))
    Data.List.lvl1 } in
! case GHC.Prim.par# @ GHC.Integer.Internals.Integer a_s1rs
  of _ { __DEFAULT ->
  case Data.List.prod1
     (Main.factors2
        (Data.List.prod1
!          (GHC.Num.up_list Main.lvl Main.lvl n2_ahT) Data.List.lvl1))
     Data.List.lvl1
! of x1_aUS { __DEFAULT ->
! case GHC.Real.$wdivMod x1_aUS a_s1rs of _ { (# ww1_aUo, _ #) ->
! ww1_aUo
  }
  }
  }


 Or do you mean to say that *your* installation of GHC
 behaves the same when the function `parallelize' is defined in
 the same module and when it's imported?
 
 Yes, exactly. With both ghc-6.10.1 and a very recent build of ghc-6.11

As you can see, I'm using the 6.11.20090421 build. I looked at more recent
ones, but the Linux builds seem to be less stable in May. I have the same
results (though I didn't use -ddump-simpl) with 6.8.2. Can you recommend a
different version to try?




Re: [Haskell-cafe] Re: A problem with par and modules boundaries...

2009-05-23 Thread Mario Blažević

On Fri 22/05/09 10:51 AM, John Lato jwl...@gmail.com sent:
 Hi Mario,
 
 It looks like the parallelize function is getting inlined when it's in
 the same file, but not when it's in a separate file.
 
 Adding a {-# INLINE parallelize #-} pragma to the module with
 parallelize recovers all the performance for me.
 
 You could probably see exactly what's happening in more detail by
 going through the Core output.


Thank you, this advice helped. The Core output indicates that function `test'
evaluates the arguments to `parallelize' before it calls it. In other words, the
call to `parallelize' is optimized as a strict function call -- which it is. The
problem is that this optimization evaluates the arguments sequentially. 
Compiling
with optimizations turned off regains the parallel execution.

I guess I will report this as a GHC bug. Or is it a feature request?
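
For concreteness, here is a sketch of the workaround John describes, assuming
`parallelize' lives in a module named Parallelizable (the module name is my
assumption):

 module Parallelizable (parallelize) where

 import Control.Parallel (par, pseq)

 -- The pragma makes GHC inline parallelize at cross-module call sites,
 -- so the call is no longer compiled as a strict function call whose
 -- arguments get evaluated sequentially beforehand.
 {-# INLINE parallelize #-}
 parallelize :: a -> b -> (a, b)
 parallelize a b = a `par` (b `pseq` (a, b))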




Re: Re: [Haskell-cafe] Re: A problem with par and modules boundaries...

2009-05-23 Thread Mario Blažević
 You could probably see exactly what's happening in
 more detail by going through the Core output.
 
 Thank you, this advice helped. The Core output indicates
 that function `test' evaluates the arguments to
 `parallelize' before it calls it. In other words, the
 call to `parallelize' is optimized as a strict function
 call -- which it is. The problem is that this
 optimization evaluates the arguments sequentially.
 Compiling with optimizations turned off regains the
 parallel execution.

 I guess I will report this as a GHC bug. Or is it a
 feature request?
 
 As Duncan suggested, try with GHC head (grab a snapshot). `par` et al
 are much improved.

I already have, with the snapshot from the 21st of April. It behaves the same
as 6.8.2, except it runs for twice as long.

I'd like to take back a part of what I said before, though: `parallelize' should
be strict only in its second argument. Its strictness in the first argument
should be the same as with `par`. Even though `parallelize x y'
always evaluates both x and y, the following test works fine with optimizations
even if `parallelize' is imported:

main = putStrLn (snd $ parallelize undefined "Hello, World!")

So the function is not strict, and I don't understand why GHC should evaluate 
the
arguments before the call.

Does anybody know of a pragma or another way to make a function *non-strict* 
even
if it does always evaluate its argument? In other words, is there a way to
selectively disable the strictness optimization?




Re: [Haskell-cafe] Re: A problem with par and modules boundaries...

2009-05-23 Thread Mario Blažević
On Sat 23/05/09  4:15 PM , Alexander Dunlap alexander.dun...@gmail.com sent:
 Does anybody know of a pragma or another way to make a
 function *non-strict* even if it does always evaluate its
 argument? In other words, is there a way to
 selectively disable the strictness optimization?

 parallelize a b | False = (undefined, undefined)
                 | otherwise = a `par` (b `pseq` (a, b))

 might do, unless strictness analysis is smart enough to
 know that the False guard is always, well, False.
 
 GHC.Prim.lazy?

It's GHC.Exts.lazy nowadays, and it doesn't have any effect. Neither
does the `| False' guard. The only way I found to disable the eager
argument evaluation is to pass them in as functions:

 parallelize :: Num t => (t -> a) -> (t -> b) -> (a, b)
 parallelize a b = let a' = a 1
                       b' = b 1
                   in (a' `par` (b' `pseq` (a', b')))

Then it can be imported and called like this:

 test n1 n2 = let (p1, p2) = parallelize
                               (\n0 -> product $ factors $ product [n0..n1])
                               (\n0 -> product $ factors $ product [n0..n2])

This solution is incredibly fragile. If the declared type of parallelize is
modified by replacing t with Integer, the evaluation becomes eager again.
Also, the argument functions can't simply take () as their argument, which
would make this solution reasonable.

If this is all the strictness optimizer's fault, I'm awed by how
difficult it is to trick it into not doing its job.
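
For reference, the GHC.Exts.lazy attempt mentioned above might have looked like
this (my reconstruction; as noted, it had no observable effect):

 import Control.Parallel (par, pseq)
 import GHC.Exts (lazy)

 -- Wrap the first argument in lazy, hoping to hide its use from the
 -- strictness analyser.
 parallelize :: a -> b -> (a, b)
 parallelize a b = lazy a `par` (b `pseq` (lazy a, b))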




Re: Re: Re: [Haskell-cafe] Re: A problem with par and modules boundaries...

2009-05-23 Thread Mario Blažević
On Sat 23/05/09 2:51 PM, Duncan Coutts duncan.cou...@worc.ox.ac.uk sent:
 On Sat, 2009-05-23 at 13:31 -0400, Mario Blažević wrote:
 ...
 So the function is not strict, and I don't understand
 why GHC should evaluate the arguments before the call.
 
 Right, it's lazy in the first and strict in the second argument. As far
 as I can see we have no evidence that it is evaluating anything before
 the call.


When I look at the Core definition of `test', it begins with


\ (n1axl::integer:GHCziIntegerziInternals.Integer)
  (n2axn::integer:GHCziIntegerziInternals.Integer) ->
%let as1sU :: integer:GHCziIntegerziInternals.Integer =
       base:DataziList.prod1
         (main:Main.factors2
            (base:DataziList.prod1
               (base:GHCziNum.upzulist main:Main.lvl main:Main.lvl n1axl)
               base:DataziList.lvl1))
         base:DataziList.lvl1
%in %case integer:GHCziIntegerziInternals.Integer
          (ghczmprim:GHCziPrim.parzh
             @ integer:GHCziIntegerziInternals.Integer
             as1sU)
    %of (dsapq::ghczmprim:GHCziPrim.Intzh)


To my untrained eyes, this looks like it's evaluating

 product $ factors $ product [1..n1]

which is the first argument to `parallelize'. I assume that %case in
Core evaluates the argument to WHNF, just like case in Haskell.
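
Rendered back into Haskell, the Core above reads roughly as follows (my
paraphrase of the dump, using the names from the original program):

 test :: Integer -> Integer -> Integer
 test n1 n2 = let a = product (factors (product [1..n1]))
                  b = product (factors (product [1..n2]))
              in a `par` (b `pseq` (b `div` a))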

Then again, I could be completely misinterpreting what Core is, because
I can't find any call to `parallelize' before or after that. It appears
to be inlined in Core, regardless of whether the pragma

 {-# INLINE parallelize #-}

is there or not. Actually, I can't see any effect of that pragma in the
core files whatsoever, but it certainly has effect on run time.

 Does anybody know of a pragma or another way to make a
 function *non-strict* even if it does always evaluate its argument?
 In other words, is there a way to selectively disable the strictness
 optimization?
 
 Yes, which is what pseq and par already do.
 
 If there's a bug, we need to reproduce it and report it. I cannot
 reproduce it.

If you mean that you can't reproduce anything that's contrary to the
specification, that's not saying much: there are practically no guarantees on
what `par' is supposed to accomplish. If you mean you can't reproduce anything
you wouldn't expect, pray explain what is going on, because everybody else seems
to be surprised. Or do you mean to say that *your* installation of GHC behaves
the same when the function `parallelize' is defined in the same module and when
it's imported?




[Haskell-cafe] A problem with par and modules boundaries...

2009-05-21 Thread Mario Blažević
I'll cut to the chase. The short program below works perfectly: when I compile 
it
with -O2 -threaded and run with +RTS -N2 command-line options, I get a nearly 
50%
real-time improvement:

$ time ./primes-test +RTS -N2
5001

real    0m9.307s
user    0m16.581s
sys     0m0.200s

However, if I move the `parallelize' definition into another module and import
that module, the performance is completely lost:

$ time ./primes-test +RTS -N2
5001

real    0m15.282s
user    0m15.165s
sys     0m0.080s

I'm confused. I know that `par` must be able work across modules boundaries,
because Control.Parallel.Strategies is a module and presumably it works. What am
I doing wrong?


 module Main where
 
 import Control.Parallel
 import Data.List (find)
 import Data.Maybe (maybe)
 
 --import Parallelizable
 parallelize a b = a `par` (b `pseq` (a, b))
 
 test :: Integer -> Integer -> Integer
 test n1 n2 = let (p1, p2) = parallelize
                               (product $ factors $ product [1..n1])
                               (product $ factors $ product [1..n2])
              in p2 `div` p1
 
 factors n = maybe [n] (\k -> (k : factors (n `div` k)))
                       (find (\k -> n `mod` k == 0) [2 .. n - 1])
 
 main = print (test 5000 5001)




Re: [Haskell-cafe] Multi-parameter type class woes

2008-12-15 Thread Mario Blažević

 I think that 
 http://www.haskell.org/pipermail/haskell-cafe/2008-April/041461.html
 may be relevant. It's a design decision.

Thanks for the link. I've read through the thread, but rather than try to 
figure out if it's the same issue and whether it's a design decision or a 
historical accident, I've decided to create a ticket (#2885) and let GHC 
developers decide if it's valid or not.




[Haskell-cafe] Multi-parameter type class woes

2008-12-14 Thread Mario Blažević
I have, for a change, a relatively simple problem with type classes. Can 
somebody explain to me, or point me to an explanation of the behaviour I see?

Here is a short and useless example:

   {-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

   import Data.Maybe

   class Container x y where
      wrapper :: x -> Bool
      unwrap :: x -> y
      rewrap :: y -> x

   liftWrap :: Container x y => (y -> y) -> (x -> x)
   liftWrap f x = (if wrapper x then rewrap . f . unwrap else id) x

   instance Container (Maybe x) x where
      wrapper = isJust
      unwrap = fromJust
      rewrap = Just

   main = print (liftWrap (succ :: Int -> Int) (Just 1 :: Maybe Int))

GHC 6.10.1 refuses to typecheck the 'wrapper' function in the definition of 
'liftWrap', with the following error message:

Could not deduce (Container x y) from the context (Container x y1)
  arising from a use of `wrapper' at Test.hs:11:22-30
Possible fix:
  add (Container x y) to the context of
the type signature for `liftWrap'
In the expression: wrapper x
In the expression:
(if wrapper x then rewrap . f . unwrap else id) x
In the definition of `liftWrap':
liftWrap f x = (if wrapper x then rewrap . f . unwrap else id) x

Let me clarify that I'm aware that in this particular example a functional 
dependency should be used. Also, I can think of a few workarounds for my actual 
problem, so I'm not asking for any solutions. I'm looking for an explanation. 
It bugs me that my intuition of how this type class should have worked is 
completely wrong. The error message does not help, to put it mildly. Where 
should I go, what should I read?
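
For what it's worth, here is a sketch of the functional-dependency variant
alluded to above; with the `x -> y' dependency GHC can resolve the element type
from the container type, and the example compiles (printing Just 2):

   {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

   import Data.Maybe

   -- x -> y: the container type determines its element type, so the use
   -- of wrapper in liftWrap no longer leaves y ambiguous.
   class Container x y | x -> y where
      wrapper :: x -> Bool
      unwrap :: x -> y
      rewrap :: y -> x

   liftWrap :: Container x y => (y -> y) -> (x -> x)
   liftWrap f x = (if wrapper x then rewrap . f . unwrap else id) x

   instance Container (Maybe x) x where
      wrapper = isJust
      unwrap = fromJust
      rewrap = Just

   main = print (liftWrap (succ :: Int -> Int) (Just 1 :: Maybe Int))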





Re: Re: [Haskell-cafe] Multi-parameter type class woes

2008-12-14 Thread Mario Blažević
 I'll take a swing at this one:
 
 instance Container (Maybe x) [x] where
     wrapper = isNothing
     . . .
 
 That isn't a sensible definition of 'wrapper', but I believe, without
 trying to compile it, that it is completely legal. Which wrapper do you use?
 
 You /don't/ have a different matching Container instance, but without the
 functional dependency you /might/, and GHC barfs.


But liftWrap doesn't require any particular instance; it's a
generic function accepting any pair of types for which there is
an instance of Container. Instance selection (as I understand it)
shouldn't come into play until one applies liftWrap to a
particular type, and indeed it does cause problems there: note
the type annotations on the last line. That part I understand
and accept, or at least have learned to live with.









[Haskell-cafe] A problem with nested regions and higher-order functions

2008-09-06 Thread Mario Blažević
I forgot the subject, sorry for reposting...

I'm trying to apply the nested-regions design pattern (as in "Lightweight
Monadic Regions" by Oleg Kiselyov and Chung-chieh Shan), if that's the proper
term, in the hope of gaining a bit more type safety in this little library I'm
working on (Streaming Component Combinators, available on Hackage). I guess the
problem is that I'm getting too much type safety now, because I can't get the
thing to compile. Most of the existing code works; the only exceptions seem to
be various higher-order functions. I've reduced the problem to several lines of
Literate Haskell code below. Can anybody find a solution or confirm there isn't
one?

 {-# LANGUAGE EmptyDataDecls, Rank2Types #-}
 {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
              FlexibleInstances, IncoherentInstances #-}
 module Main where
 main = undefined

I'll call the main type, originally a monad transformer, simply Region. I'm 
leaving out the Monad and MonadTransformer instances, because they don't 
contribute to the problem. The parameter r is the phantom region type.

 newtype Region r a = Region a

The Ancestor class is used to denote relationship between two regions where one 
is nested in another.

 data Child r

 class Ancestor r r'

 instance                   Ancestor r (Child r)
 instance Ancestor r1 r2 => Ancestor r1 (Child r2)

Handle is a simple wrapper around a value. It carries information about the 
region that originates the value.

 data Handle r x = Handle x

A typical calculation in the Region monad will take a bunch of Handles 
inherited from an Ancestor region and do something with them. The Ancestor 
constraint is there to ensure that the handles are not fake but genuinely 
inherited.

 type SingleHandler x y = forall r1s rs. Ancestor r1s rs =>
                          Handle r1s x -> Region rs y
 type DoubleHandler x y z = forall r1d r2d rd. (Ancestor r1d rd, Ancestor r2d rd) =>
                            Handle r1d x -> Handle r2d y -> Region rd z

And now I get to the problem. The following higher-order function doesn't 
type-check:

 mapD :: (SingleHandler x z -> SingleHandler y z)
      -> DoubleHandler x w z -> DoubleHandler y w z
 mapD f d = \y w -> f (\x -> d x w) y

I get the same error from GHC 6.8.2 and 6.8.2:

Test.lhs:36:28:
Could not deduce (Ancestor r2d rs)
  from the context (Ancestor r1s rs)
  arising from a use of `d' at Test.lhs:36:28-32
Possible fix:
  add (Ancestor r2d rs) to the context of
the polymorphic type
  `forall r1s rs. (Ancestor r1s rs) => Handle r1s x -> Region rs z'
In the expression: d x w
In the first argument of `f', namely `(\ x -> d x w)'
In the expression: f (\ x -> d x w) y

The same code compiles just fine if all the Ancestor constraints are removed. I 
don't see any place to add the extra (Ancestor r2d rs) constraint, as GHC 
recommends. I think it ought to be able to figure things out based on the 
existing constraints, but I may be wrong: perhaps higher-order functions pose 
a fundamental problem for type-level programming. Can anybody shed any light on 
this?
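
A minimal sketch of the same failure with the region machinery stripped away
(the names here are mine, and the last line is exactly what GHC rejects):

 {-# LANGUAGE Rank2Types, MultiParamTypeClasses, FlexibleContexts #-}

 class C a b

 -- accepts handlers that need only the single constraint C Int c
 f :: (forall c. C Int c => c -> Int) -> Int
 f _ = 0

 -- a handler that additionally requires C Bool c cannot be passed to f;
 -- GHC reports "Could not deduce (C Bool c) from the context (C Int c)"
 g :: (forall c. (C Int c, C Bool c) => c -> Int) -> Int
 g h = f h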




[Haskell-cafe] Control.Concurrent.forkIO versus Control.Parallel.par

2008-07-27 Thread Mario Blažević

Hello. I have a question about parallel computation in Haskell. After 
browsing the GHC library documentation, I was left with the impression that 
there are two separate mechanisms for expressing concurrency: 
Control.Parallel.par for pure computations and Control.Concurrent.forkIO for 
computations in the IO monad.

This dichotomy becomes a problem when one tries to use concurrency from a 
monad transformer, though I'm sure that's not the only such situation. One 
cannot assume that the base monad is IO, so forkIO cannot be used, while 
Control.Parallel.par won't run monadic computations. My first solution was to 
replace the base monad class for the monad transformer with the following 
ParallelizableMonad class:


import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad.Identity (Identity(..))
import Control.Parallel (par, pseq)

class Monad m => ParallelizableMonad m where
   parallelize :: m a -> m b -> m (a, b)
   parallelize ma mb = do a <- ma
                          b <- mb
                          return (a, b)

instance ParallelizableMonad Identity where
   parallelize (Identity a) (Identity b) = Identity (a `par` (b `pseq` (a, b)))

instance ParallelizableMonad IO where
   parallelize ma mb = do va <- newEmptyMVar
                          vb <- newEmptyMVar
                          forkIO (ma >>= putMVar va)
                          forkIO (mb >>= putMVar vb)
                          a <- takeMVar va
                          b <- takeMVar vb
                          return (a, b)


I tested this solution, and it worked for IO computations in the sense that 
they used both CPUs. The test also ran slower on two CPUs than on one, but 
that's beside the point.
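
As a usage sketch (pairSum is a hypothetical example, not from any library), a
computation written against the class stays agnostic of the base monad:

-- Runs the two argument computations in parallel, in whatever base monad
-- the caller picks (Identity, IO, ...), and combines the results.
pairSum :: ParallelizableMonad m => m Int -> m Int -> m Int
pairSum mx my = do (x, y) <- parallelize mx my
                   return (x + y)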

Then I realized that par can, in fact, be used with any monad; it just needs a 
little nudge:


parallelize :: Monad m => m a -> m b -> m (a, b)
parallelize ma mb = let a = ma >>= return
                        b = mb >>= return
                    in a `par` (b `pseq` liftM2 (,) a b)


However, in this version the IO monadic computations still appear to use only 
one CPU. I cannot get par to parallelize monadic computations. I've used the 
same command-line options in both examples: -O -threaded and +RTS -N2. What am 
I missing?

