Re: [Haskell-cafe] ANNOUNCE: fast-tags-0.0.1

2012-04-01 Thread Evan Laforge
On Sun, Apr 1, 2012 at 3:27 AM, Roman Cheplyaka  wrote:
> It's useful to mention the limitations of this package, so that people
> know what to expect and don't spend their time testing it only to find
> that it doesn't suit their needs.

Good point, I'll put the limitations and TODO stuff into the package
description.

> For example:
>  doesn't generate tags for definitions without type signatures

That was a conscious decision, though now that I think about it I
could assume they're unexported and use vim's static tags for those
definitions.  I don't know about the most common case, but I almost
always have signatures on top level definitions, and I don't really
feel like I need tags for where or let-bound definitions.
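
For reference, a static entry in the tags file vim reads looks roughly
like this: fields are tab-separated, and the trailing "file:" field is
what marks a tag as local to its file.  The entry below is illustrative,
not actual fast-tags output:

    unexportedHelper	src/Foo.hs	/^unexportedHelper ::/;"	file: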

>  doesn't understand common extensions, such as type families

Oops, that was an oversight.  Should be easy enough to fix.



Re: [Haskell-cafe] ANNOUNCE: fast-tags-0.0.1

2012-04-01 Thread Evan Laforge
> * haskell-src-exts is not slow. It can parse a 769-module codebase racking up
>  to 100k lines of code in just over a second on my machine. That's
>  good. Also, I don't think speed of the individual file matters, for
>  reasons I state below.

Wow, that's faster than my machine.

> * Broken source is not a big issue to me. Code is written with a GHCi session
>  on-hand; syntactic issues are the least of my worries. I realise it
>  will be for others.

I do too, but my usual practice is to have ghci in another window, save
the file, and hit :r over there.  So it's distracting when the tags
program spits out a bunch of syntax errors; I'm used to seeing those
in ghci.  And I save somewhat compulsively :)

> The problem with haskell-src-exts is that it refuses to parse expressions for
> which it cannot reduce the operator precedence, meaning it can't parse any
> module that uses a freshly defined operator.

Oh right, I remember having that problem too.

> The reason I don't think individual file performance matters is that
> the output can be cached. There's also the fact that if I modify a
> file, and generate tags, I'm likely editing that file presently, and
> I'm not likely to need the jumping around that tags provides.

It's true for me too, though I like the convenience of retagging on
every single save.  But it's true that given incremental tags
regeneration, haskell-src-exts is plenty fast too.  I didn't put a lot
of thought into the name; I mostly just wanted tags I could run on
every save.

>> Then there's the venerable hasktags, but it's buggy and the source
>> is a mess. I fixed a bug where it doesn't actually strip comments
>> so it makes tags to things inside comments, but then decided it
>> would be easier to just write my own.
>
> Hasktags is hardly buggy in my experience. The comments bug is minor. But I
> agree that the codebase is messy and would be better handled as
> Text. But again, speed on the individual basis isn't a massive issue here.

The comments thing was really big for me.  It made it miss a lot of tags.

> Unfortunately there appears to be a horrific problem with it, as the
> log below shows:

Ouch.  I probably have some kind of laziness problem in there.

I'll download some packages via cabal and try it on some different
codebases.
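
To illustrate the kind of thing I mean (this is the classic shape of a
laziness leak, not necessarily what fast-tags is actually doing):

    import Data.List (foldl')

    sumLazy, sumStrict :: [Int] -> Int
    sumLazy   = foldl  (+) 0  -- builds a chain of thunks (((0+1)+2)+...)
    sumStrict = foldl' (+) 0  -- forces the accumulator at each step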

> I like the fast-tags codebase so it would be nice to start using it,
> but I hope you can test it on either a more substantial codebase or
> just a different codebase. Or just grab some packages from Hackage and
> test. Emacs support would be nice, I might add it myself if you can
> fix the performance explosion. Right now hasktags is OK for me. I won't be
> hacking on it in the future for more features because…

Will do.  I also realized 'x, y :: ...' style signatures don't work.
And it might be nice to support internal definitions and use vim's
"static tag" feature.

> While we're on the topic I think haskell-src-exts is worth investing
> time in, as it has semantic knowledge about our code. I am trying to
> work on it so that it can preserve comments and output them, so that
> we can start using it to pretty print our code, refactor our code,
> etc. It could also be patched to handle operators as Operators [Exp]
> rather than OpApp x (OpApp y), etc. I think.

Oh I agree haskell-src-exts is great and I love it.  I used it for
fix-imports.  It does support comments, but dealing with them was a
pain because they just have line numbers and you have to do some work
to figure out which bit of source they are "attached" to.  Part of the
problem is, of course, that "attached" is a fuzzy concept, but there
could definitely be some tools to make it easier.

Actually, if haskell-src-exts had a lenient parsing mode then it would
be easier to use and less buggy than a hand-written thing.

On Sun, Apr 1, 2012 at 1:44 PM, Levent Erkok  wrote:
> Chris: You might be experiencing this issue:
> http://hackage.haskell.org/trac/ghc/ticket/5783

I'm guessing not, since he was using 0.0.2, which has the version
constraint.  And his symptoms are different.



Re: [Haskell-cafe] Is this a correct explanation of FRP?

2012-04-01 Thread Ertugrul Söylemez
Peter Minten  wrote:

> Sorry, I don't understand this. Would it be correct to say that AFRP
> shares the basic ideas of FRP in that it has behaviors and
> events/signals and that the main difference comes from the way AFRP is
> implemented?

Well, FRP is usually interpreted as dealing with time-varying values.
The main selling point of FRP is the ability to combine those values
like ordinary ones and let them react to events.

AFRP offers the same functionality, but the underlying idea is
different.  To the user the difference becomes apparent when combining
those special values (whatever you call them, I always thought
"behavior" is a bad name).  Also the values can implement certain
semantics which would be impossible in the traditional concept, like a
frame counter.


> As I see FRP it has three components: the basic concepts, the
> underlying theory and the way the libraries actually work.
>
> As far as I understand FRP (which is not very far at all) the basic
> concepts can, simplified, be formulated as:
>
> * There are things which have a different value depending on when you
> look at them. (behaviors)

That's already specific to traditional FRP.  In AFRP the value mutates.
It's not a function of some notion of time.  It is similar to a list.
That list contains the current value as well as a description of the
future of the value:

newtype SF a b = SF (a -> (b, SF a b))

The current value and the future depend on a momentary input value of
type 'a' (which usually comes from another SF).
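
For example, the frame counter I mentioned has no sensible encoding as
a function of time, but is trivial as a signal function.  A sketch in
terms of the SF type above:

    -- counts the steps it has gone through, ignoring its input
    counter :: SF a Int
    counter = go 0
      where go n = SF (\_ -> (n, go (n + 1)))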


> "Normal" FRP theory expresses behaviors as "Time -> a" and events as
> "[(Time,a)]". AFRP uses some kind of "signal function" to express
> behaviors, or behaviors are signal functions and those functions
> interact with events. Anyway AFRP uses a completely different
> theoretical way of thinking about events and behaviors.

A behavior from traditional FRP is a special case of a signal function.
It's a 'stateless' signal function, i.e. one that never mutates.  In
both cases you would use switching combinators to react to events.
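
In terms of the SF type above, the embedding is just a signal function
that steps to itself, so it never accumulates state (a sketch):

    fromFunction :: (a -> b) -> SF a b
    fromFunction f = SF (\x -> (f x, fromFunction f))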


> Netwire also uses AFRP but extends the theory with something called
> signal inhibition. Like everything else it shares the basic concepts
> of FRP.

No, Netwire does things very differently.  Note the total absence of
switching combinators.  Where in traditional FRP and regular AFRP you
have events and switching, in Netwire you have signal inhibition and
selection.  AFRP really just changes the theory to establish some
invariants.  Netwire changes the whole paradigm.  Consider alterTime as
expressed in the Netwire framework:

alterTime = fullTime <|> halfTime

This isn't switching.  It's selection.  If fullTime decides to be
productive, then alterTime acts like fullTime.  Otherwise it acts like
halfTime.  If both inhibit, then alterTime inhibits.  This allows for a
much more algebraic description of reactive systems.
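
A toy model of inhibition, deliberately ignoring Netwire's actual types
and just making the idea concrete: represent a wire step as possibly
producing no value, so that (<|>) falls through to its second argument
exactly when the first inhibits.

    newtype Wire a b = Wire { stepWire :: a -> (Maybe b, Wire a b) }

    (<|>) :: Wire a b -> Wire a b -> Wire a b
    w1 <|> w2 = Wire $ \x -> case stepWire w1 x of
        (Just y,  w1') -> (Just y, w1' <|> w2)           -- w1 productive
        (Nothing, w1') -> let (my, w2') = stepWire w2 x  -- w1 inhibits:
                          in  (my, w1' <|> w2')          -- select w2

If both produce Nothing, the combined wire produces Nothing, just as
alterTime inhibits when both fullTime and halfTime inhibit.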


Greets,
Ertugrul


-- 
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://ertes.de/




Re: [Haskell-cafe] Generalizing (++) for monoids instead of using (<>)

2012-04-01 Thread Thomas DuBuisson
On Sun, Apr 1, 2012 at 1:58 PM, aditya bhargava
 wrote:
> After asking this question:
> http://stackoverflow.com/questions/9963050/standard-way-of-joining-two-data-texts-without-mappend
>
> I found out that the new infix operator for `mappend` is (<>). I'm wondering
> why ghc 7.4 didn't generalize (++) to work on monoids instead.

Such decisions should really be made by the Haskell Prime committee
(vs GHC HQ).  In Haskell there is a continuing tension between making
things polymorphic and keeping the prelude functions monomorphic so
they generate simple error messages (among other arguments).  At this
point, the additional argument that any new definition of "Haskell"
should remain backwards compatible also holds weight, and this slows
the rate of change.

This is not a new issue; there are a number of functions that could be
defined more generally (a common example: map/fmap).  The problem with
making such changes is a matter of consensus and the will to see things
through.
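
The map/fmap pair shows the shape of the trade-off nicely.  The types
below are just the standard ones; the comparison is the only commentary
added:

    map  ::              (a -> b) -> [a] -> [b]
    fmap :: Functor f => (a -> b) -> f a -> f b

    -- On lists, fmap *is* map, so generalizing map would change no
    -- program's meaning.  The cost is in inference and error reporting:
    -- a misuse of map is reported directly against [a], while the same
    -- misuse of fmap surfaces as a missing or ambiguous Functor
    -- instance, which is harder to decipher.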

Cheers,
Thomas



Re: [Haskell-cafe] Generalizing (++) for monoids instead of using (<>)

2012-04-01 Thread Daniel Peebles
There are many reasons, but some of the more cited ones are that (<>) will
break less code than (++) would, since (++) is ubiquitous and (<>) is mostly
used in some pretty printers. Yes, mappend's type can be refined to that of
the current list (++), but the increased polymorphism still has the
potential to break existing code by making it harder to resolve instances.
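
One concrete way the breakage shows up (a sketch, with (+++) standing in
for a hypothetical Monoid-generalized (++)):

    import Data.Monoid (Monoid, mappend)

    (+++) :: Monoid m => m -> m -> m
    (+++) = mappend

    selfAppend = \xs -> xs ++ xs      -- fine today: [a] -> [a]
    -- selfAppend' = \xs -> xs +++ xs -- rejected: under the monomorphism
    --                                -- restriction the Monoid constraint
    --                                -- can neither be generalized nor
    --                                -- defaulted, so the type is ambiguous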

As for (<>) meaning not equal to, do you also have a problem with Monad's
(>>) meaning a right bitwise shift, or the mutationey form of it, (>>=)? :)
I don't think anyone in Haskell has ever used (<>) to mean (/=), so the
fact that there exist a couple of languages out there that do use it that
way shouldn't affect our decision.

Dan

On Sun, Apr 1, 2012 at 4:58 PM, aditya bhargava wrote:

> After asking this question:
>
> http://stackoverflow.com/questions/9963050/standard-way-of-joining-two-data-texts-without-mappend
>
> I found out that the new infix operator for `mappend` is (<>). I'm
> wondering why ghc 7.4 didn't generalize (++) to work on monoids instead. To
> me, (++) is much more clear. (<>) means "not equal to" for me. Can anyone
> shed light on this decision?
>
>
> Adit
>
> --
> adit.io
>


[Haskell-cafe] Generalizing (++) for monoids instead of using (<>)

2012-04-01 Thread aditya bhargava
After asking this question:
http://stackoverflow.com/questions/9963050/standard-way-of-joining-two-data-texts-without-mappend

I found out that the new infix operator for `mappend` is (<>). I'm
wondering why ghc 7.4 didn't generalize (++) to work on monoids instead. To
me, (++) is much more clear. (<>) means "not equal to" for me. Can anyone
shed light on this decision?


Adit

-- 
adit.io


Re: [Haskell-cafe] ANNOUNCE: fast-tags-0.0.1

2012-04-01 Thread Levent Erkok
Chris: You might be experiencing this issue:
http://hackage.haskell.org/trac/ghc/ticket/5783

Upgrading text and recompiling fast-tags should take care of this problem.

-Levent.

On Sun, Apr 1, 2012 at 10:12 AM, Christopher Done
 wrote:
> By the way, I'm assuming that this library isn't an April Fools joke
> by making a library called “fast” with explosive O(n²) time problems.
> :-P
>


[Haskell-cafe] Towards a single, unified API for incremental data processing

2012-04-01 Thread John Millikin
There are currently several APIs for processing strict monoidal values
as if they were pieces of a larger, lazy value. Some of the most
popular are based on Oleg's left-fold enumerators, including
"iteratee", "enumerator", and "iterIO". Other choices include
"comonads", "conduits", and "pipes".

Despite having various internal implementations and semantics, these
libraries generally export a similar-looking API. This is a terrible
duplication of effort, and it causes dependent packages to be strongly
tied to the underlying implementation.

I propose that a new package, "tzinorot", be written to provide a
single API based on Data.List. It should be pretty easy to use,
requiring only a few common extensions to the type system.

For example, the enumerator package's 'mapM' function could be
generalized for use in tzinorot through a few simple modifications to
the type signature:

--
-- enumerator mapM
mapM :: Monad m => (ao -> m ai) -> Enumeratee ao ai m b

-- tzinorot mapM
mapM :: (Monad m, Tzinorot t, ListLike l1 a1, ListLike l2 a2)
     => (l1 a1 -> m (l2 a2))
     -> t Void s (TzinorotItems (l1 a1)) (TzinorotItems (l2 a2)) m r
--

To make it easier to install and use the tzinorot package, it will
depend on all of its supported implementations (iteratee, enumerator,
conduits, pipes, etc), and use Michael Snoyman's "cabala" tool to
manage dependency versions. See the cabala announcement for details on
use:

http://www.yesodweb.com/blog/2012/04/replacing-cabal



Re: [Haskell-cafe] ANN: acme-http

2012-04-01 Thread Jeremy Shaw
On Sun, Apr 1, 2012 at 12:48 PM, Michael Snoyman  wrote:

> That's awesome! I think you should pair this up with the /dev/null
> datastore and then you'll be truly webscale!

Well, acid-state does have a backend that skips writing any
transaction logs to disk, making it purely memory-based:

http://hackage.haskell.org/packages/archive/acid-state/0.6.3/doc/html/Data-Acid-Memory.html

So, that is a bit like a /dev/null data store. It works really great
as long as your app never restarts :)

- jeremy



Re: [Haskell-cafe] ANN: acme-http

2012-04-01 Thread Michael Snoyman
On Sun, Apr 1, 2012 at 8:46 PM, Jeremy Shaw  wrote:
> Hello,
>
> As we all know, the true measure of performance for a web server is
> the classic PONG test. And, so the Happstack team is pleased to
> announce the release of the new acme-http server!
>
> hackage:
>  http://hackage.haskell.org/package/acme-http
>
> source:
>  http://patch-tag.com/r/stepcut/acme-http
>
> When testing on my laptop with +RTS -N4 using the classic PONG test:
>
>  $ httperf --hog -v --server 127.0.0.1 --port 8000 --uri /
> --num-conns=1000 --num-calls=1000 --burst-length=20 --rate=1000
>
> acme-http delivered 221,693.0 req/s, making it the fastest Haskell
> web server on the planet.
>
> By comparison, warp delivered 51,346.6 req/s on this machine.
>
> The secret to acme-http's success is that it largely avoids doing
> anything not required to win the PONG benchmark. It does not support
> timeouts, it does not check quotas, it assumes the client is HTTP 1.1,
> it does not catch exceptions, and it responds to every single request
> with PONG.
>
> The goal of acme-http is twofold:
>
>  1. determine the upper bound on Haskell web-server performance
>  2. push that upper bound even higher
>
> In regards to #1, we have now established the current upper limit at
> 221,693.0 req/s.
>
> In regards to #2, I believe acme-http will be useful as a place to
> investigate performance bottlenecks. It is very small, only 250 lines
> of code or so. And many of those lines deal with pretty-printing, and
> other non-performance related tasks. Additionally, it works in the
> plain IO monad. It does not use conduits, enumerators, pipes, or even
> lazy IO. As a result, it should be very easy to understand, profile,
> and benchmark.
>
> In providing such a simple environment and avoiding as much extra work
> as possible we should be able to more easily answer questions like
> "Why is so much RAM required?", "What is limiting the number of
> connections per second", etc.
>
> As we address these issues in acme-http, we can hopefully bring
> solutions back to practical frameworks, or to the underlying GHC
> implementation itself.
>
> If performance tuning is your thing, I invite you to check out
> acme-http and see if you can raise the limit even higher!
>
> - jeremy
>

That's awesome! I think you should pair this up with the /dev/null
datastore and then you'll be truly webscale!

Michael



[Haskell-cafe] ANN: acme-http

2012-04-01 Thread Jeremy Shaw
Hello,

As we all know, the true measure of performance for a web server is
the classic PONG test. And, so the Happstack team is pleased to
announce the release of the new acme-http server!

hackage:
  http://hackage.haskell.org/package/acme-http

source:
  http://patch-tag.com/r/stepcut/acme-http

When testing on my laptop with +RTS -N4 using the classic PONG test:

 $ httperf --hog -v --server 127.0.0.1 --port 8000 --uri /
--num-conns=1000 --num-calls=1000 --burst-length=20 --rate=1000

acme-http delivered 221,693.0 req/s, making it the fastest Haskell
web server on the planet.

By comparison, warp delivered 51,346.6 req/s on this machine.

The secret to acme-http's success is that it largely avoids doing
anything not required to win the PONG benchmark. It does not support
timeouts, it does not check quotas, it assumes the client is HTTP 1.1,
it does not catch exceptions, and it responds to every single request
with PONG.

The goal of acme-http is twofold:

 1. determine the upper bound on Haskell web-server performance
 2. push that upper bound even higher

In regards to #1, we have now established the current upper limit at
221,693.0 req/s.

In regards to #2, I believe acme-http will be useful as a place to
investigate performance bottlenecks. It is very small, only 250 lines
of code or so. And many of those lines deal with pretty-printing, and
other non-performance related tasks. Additionally, it works in the
plain IO monad. It does not use conduits, enumerators, pipes, or even
lazy IO. As a result, it should be very easy to understand, profile,
and benchmark.
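
To give a flavor of what "plain IO" means here, a stripped-down sketch
of the same shape of server fits in a dozen lines (this uses the old
network package API and is an illustration, not acme-http's actual
code):

    import Control.Concurrent (forkIO)
    import Control.Monad (forever)
    import Network (PortID (PortNumber), accept, listenOn, withSocketsDo)
    import System.IO (BufferMode (LineBuffering), hGetLine, hPutStr,
                      hSetBuffering)

    main :: IO ()
    main = withSocketsDo $ do
        sock <- listenOn (PortNumber 8000)
        forever $ do
            (h, _, _) <- accept sock
            forkIO $ do
                hSetBuffering h LineBuffering
                forever $ do
                    -- read a line and ignore it: no parsing, no timeouts,
                    -- no exception handling, in true acme style
                    _ <- hGetLine h
                    hPutStr h "HTTP/1.1 200 OK\r\nContent-Length: 4\r\n\r\nPONG"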

In providing such a simple environment and avoiding as much extra work
as possible we should be able to more easily answer questions like
"Why is so much RAM required?", "What is limiting the number of
connections per second", etc.

As we address these issues in acme-http, we can hopefully bring
solutions back to practical frameworks, or to the underlying GHC
implementation itself.

If performance tuning is your thing, I invite you to check out
acme-http and see if you can raise the limit even higher!

- jeremy



Re: [Haskell-cafe] ANNOUNCE: fast-tags-0.0.1

2012-04-01 Thread Christopher Done
By the way, I'm assuming that this library isn't an April Fools joke
by making a library called “fast” with explosive O(n²) time problems.
:-P



Re: [Haskell-cafe] A Modest Records Proposal

2012-04-01 Thread Christopher Done
I actually read the first couple paragraphs and thought “sounds
interesting I'll read it later”. After reading it properly, I lol'd.

> After some initial feedback, I'm going to create a page for the
> Homotopy Extensional Records Proposal (HERP) on trac. There are really
> only a few remaining questions. 1) Having introduced homotopies, why
> not go all the way and introduce dependent records? In fact, are HERP
> and Dependent Extensional Records Proposal (DERP) already isomorphic?
> My suspicion is that HERP is isomorphic, but DERP is not.



Re: [Haskell-cafe] A Modest Records Proposal

2012-04-01 Thread Greg Weber
Obviously Gregory is not familiar with Homotopy. In fact, its
isomorphism predicts that if someone named Greg is involved in a
discussion, someone named Gregory will also become involved.

Or that is what I get for responding to an e-mail without reading it
on April 1st :)

On Sun, Apr 1, 2012 at 7:40 AM, Gregory Collins  wrote:
> Whoosh? :-)
>
> On Sun, Apr 1, 2012 at 3:54 PM, Greg Weber  wrote:
>>
>> Hi Gershom,
>>
>> This sounds very interesting even if I have no idea what you are
>> talking about :)
>> Please create a proposal linked from this page:
>> http://hackage.haskell.org/trac/ghc/wiki/Records
>> The first thing you should probably do is explain the programmer's
>> point of view. That ensures that we are all going through the
>> requirements phase correctly.
>> I can assure you that haskell prime would not accept a records change
>> until it is first implemented in GHC or another Haskell compiler.
>>
>> Thanks,
>> Greg Weber
>>
>> On Sat, Mar 31, 2012 at 11:14 PM, Gershom B  wrote:
>> > The records discussion has been really complicated and confusing. But
>> > I have a suggestion that should provide a great deal of power to
>> > records, while being mostly[1] backwards-compatible with Haskell 2010.
>> > Consider this example:
>> >
>> >    data A a = A{a :: a, aa :: a, aaa :: a -> A (a -> a)}
>> >    data B a = B{aaa :: a -> A (a -> a), a :: A}
>> >
>> > Now what is the type of this?
>> >
>> >     a a aa = a{a = a, aaa = aa}
>> >
>> > Using standard Haskell typeclasses this is a difficult question to
>> > answer. The types of the update for A and B do not unify in an obvious way.
>> > However, while they are intensionally quite distinct, they unify
>> > trivially extensionally. The obvious thing to do is then to extend the
>> > type system with extensional equality on record functions.
>> >
>> > Back when Haskell was invented, extensional equality was thought to be
>> > hard. But purity was thought to be hard too, and so were Monads. Now,
>> > we know that function extensionality is easy. In fact, if we add the
>> > Univalence Axiom to GHC[2], then this is enough to get function
>> > extensionality. This is a well-known result of Homotopy Type
>> > Theory[3], which is a well-explored approach that has existed for at
>> > least a few years and produced more than one paper[4]. Homotopy Type
>> > Theory is so sound and well understood that it has even been
>> > formalized in Coq.
>> >
>> > Once we extend GHC with homotopies, it turns out that records reduce
>> > to mere syntactic sugar, and there is a natural proof of their
>> > soundness (Appendix A). Furthermore, there is a canonical projection
>> > for any group of fields (Appendix B). Even better, we can make "."
>> > into the identity path operator, unifying its uses in composition and
>> > projection. In fact, with extended (parenthesis-free) section rules,
>> > "." can also be used to terminate expressions, making Haskell friendly
>> > not only to programmers coming from Java, but also to those coming
>> > from Prolog!
>> >
>> > After some initial feedback, I'm going to create a page for the
>> > Homotopy Extensional Records Proposal (HERP) on trac. There are really
>> > only a few remaining questions. 1) Having introduced homotopies, why
>> > not go all the way and introduce dependent records? In fact, are HERP
>> > and Dependent Extensional Records Proposal (DERP) already isomorphic?
>> > My suspicion is that HERP is isomorphic, but DERP is not. However, I
>> > can only get away with my proof using Scott-free semantics. 2) Which
>> > trac should I post this to? Given how well understood homotopy type
>> > theory is, I'm tempted to bypass GHC entirely and propose this for
>> > haskell-prime. 3) What syntax should we use to represent homotopies?
>> > See extended discussion in Appendix C.
>> >
>> > HTH HAND,
>> > Gershom
>> >
>> > [1] To be precise, 100% of Haskell 2010 programs should, usually, be
>> > able to be rewritten to work with this proposal with a minimal set of
>> > changes[1a].
>> >
>> > [1a] A minimal set of changes is defined as the smallest set of
>> > changes necessary to make to a Haskell 2010 program such that it works
>> > with this proposal. We can arrive at these changes by the following
>> > procedure: 1) Pick a change[1b]. 2) Is it minimal? If so keep it. 3)
>> > are we done? If not, make another change.
>> >
>> > [1b] To do this constructively, we need an order. I suggest the lo
>> > mein, since noodles give rise to a free soda.
>> >
>> > [2] I haven't looked at the source, but I would suggest putting it in
>> > the file Axioms.hs.
>> >
>> > [3] http://homotopytypetheory.org/
>> >
>> > [4] http://arxiv.org/
>> >
>> >
>> > *Appendix A: A Natural Proof of the Soundness of HERP
>> >
>> > Take the category of all types in HERP, with functions as morphisms.
>> > Call it C. Take the category of all sound expressions in HERP, with
>> > functions as morphisms. Call it D. Define a full functor from C to D.
>> > Call it F. D

Re: [Haskell-cafe] ANNOUNCE: fast-tags-0.0.1

2012-04-01 Thread Christopher Done
On 1 April 2012 00:23, Evan Laforge  wrote:
> Two of them use haskell-src which means they can't parse my code.  Two
> more use haskell-src-exts, which is slow and fragile, breaks on
> partially edited source, and doesn't understand hsc.

For what it's worth:

* As you say below, HSC is easily dealt with by ignoring # lines.

* haskell-src-exts is not slow. It can parse a 769-module codebase racking up
  to 100k lines of code in just over a second on my machine. That's
  good. Also, I don't think speed of the individual file matters, for
  reasons I state below.

* Broken source is not a big issue to me. Code is written with a GHCi session
  on-hand; syntactic issues are the least of my worries. I realise it
  will be for others.

The problem with haskell-src-exts is that it refuses to parse expressions for
which it cannot reduce the operator precedence, meaning it can't parse any
module that uses a freshly defined operator.

The reason I don't think individual file performance matters is that
the output can be cached. There's also the fact that if I modify a
file, and generate tags, I'm likely editing that file presently, and
I'm not likely to need the jumping around that tags provides.

> Then there's the venerable hasktags, but it's buggy and the source
> is a mess. I fixed a bug where it doesn't actually strip comments
> so it makes tags to things inside comments, but then decided it
> would be easier to just write my own.

Hasktags is hardly buggy in my experience. The comments bug is minor. But I
agree that the codebase is messy and would be better handled as
Text. But again, speed on the individual basis isn't a massive issue here.

> fast-tags is fast because it has a parser that's just smart enough to
> pick out the tags.  It can tagify my entire 300 module program in
> about a second.

Unfortunately there appears to be a horrific problem with it, as the
log below shows:

$ time (find . -name '*.hs' | xargs hasktags -e)

real    0m1.573s
user    0m1.536s
sys     0m0.032s
$ cabal install fast-tags --reinstall --ghc-options=-O2
Resolving dependencies...
Configuring fast-tags-0.0.2...
Preprocessing executables for fast-tags-0.0.2...
Building fast-tags-0.0.2...
[1 of 1] Compiling Main ( src/Main.hs,
dist/build/fast-tags/fast-tags-tmp/Main.o )
Linking dist/build/fast-tags/fast-tags ...
Installing executable(s) in /home/chris/.cabal/bin
$ time (find . -name '*.hs' | xargs fast-tags)
^C
real    10m39.184s
user    0m0.016s
sys     0m0.016s
$

I cancelled the program after ten minutes. The CPU was at 100% and
memory usage was climbing, but only slowly. It's not an infinite loop,
however. If I delete the "tags" file and restrict the search to only
the src directory, it completes, but successive runs get slower.

$ time (find src -name '*.hs' | xargs hasktags -e)

real    0m0.113s
user    0m0.112s
sys     0m0.008s
$ time (find src -name '*.hs' | xargs fast-tags)

real    0m0.136s
user    0m0.120s
sys     0m0.020s
$ time (find src -name '*.hs' | xargs fast-tags)

real    0m0.250s
user    0m0.244s
sys     0m0.012s

So there appears to be an exponential component to the program. E.g.

$ time (find . -name '*.hs' | xargs fast-tags)
./lib/text-0.11.1.5/tests/benchmarks/src/Data/Text/Benchmarks/Pure.hs:435:
unexpected end of block after data * =
./lib/split-0.1.2.3/Data/List/Split/Internals.hs:68: unexpected end of
block after data * =
./lib/QuickCheck-2.4.1.1/Test/QuickCheck/Function.hs:51: unexpected
end of block after data * =

real    0m26.993s
user    0m26.590s
sys     0m0.324s

If I try to run again it hangs again. I expect it's somewhere around
sort/merge/removeDups. This is on GHC 7.2.1.

> But it's also incremental, so it only needs to do that the first
> time.

For what it's worth to anybody using hasktags, I've added this to
hasktags: https://github.com/chrisdone/hasktags/commits/master

I save the file data as JSON. I tried using aeson but that's buggy:
https://github.com/bos/aeson/issues/75 At any rate, it should cache
the generated tags rather than the file data, but I'd have to
restructure the hasktags program a bit and I didn't feel like that
yet.

hasktags has no problem with this codebase:

$ time (find . -name '*.hs' | xargs hasktags --cache)

real    0m1.512s
user    0m1.420s
sys     0m0.088s

and with the cache generated, it's half the time:

$ time (find . -name '*.hs' | xargs hasktags --cache)

real    0m0.780s
user    0m0.712s
sys     0m0.072s

> I have vim's BufWrite autocommand bound to
> updating the tags every time a file is written, and it's fast enough
> that I've never noticed the delay.  It understands hsc directly
> (that's trivial, just ignore the # lines) so there's no need to run
> hsc2hs before tagifying.  The result is tags which are automatically
> up to date all the time, which is nice.

This is the use-case I (and the users who have notified me of it) have
with Emacs in haskell-mode.

> If people care about lhs and emacs tags then it wouldn't be hard to
> support 

Re: [Haskell-cafe] A Modest Records Proposal

2012-04-01 Thread Gregory Collins
Whoosh? :-)

On Sun, Apr 1, 2012 at 3:54 PM, Greg Weber  wrote:

> Hi Gershom,
>
> This sounds very interesting even if I have no idea what you are
> talking about :)
> Please create a proposal linked from this page:
> http://hackage.haskell.org/trac/ghc/wiki/Records
> The first thing you should probably do is explain the programmer's
> point of view. That ensures that we are all going through the
> requirements phase correctly.
> I can assure you that haskell prime would not accept a records change
> until it is first implemented in GHC or another Haskell compiler.
>
> Thanks,
> Greg Weber
>
> On Sat, Mar 31, 2012 at 11:14 PM, Gershom B  wrote:
> > The records discussion has been really complicated and confusing. But
> > I have a suggestion that should provide a great deal of power to
> > records, while being mostly[1] backwards-compatible with Haskell 2010.
> > Consider this example:
> >
> >    data A a = A{a :: a, aa :: a, aaa :: a -> A (a -> a)}
> >    data B a = B{aaa :: a -> A (a -> a), a :: A}
> >
> > Now what is the type of this?
> >
> >     a a aa = a{a = a, aaa = aa}
> >
> > Using standard Haskell typeclasses this is a difficult question to
> > answer. The types of the update for A and B do not unify in an obvious way.
> > However, while they are intensionally quite distinct, they unify
> > trivially extensionally. The obvious thing to do is then to extend the
> > type system with extensional equality on record functions.
> >
> > Back when Haskell was invented, extensional equality was thought to be
> > hard. But purity was thought to be hard too, and so were Monads. Now,
> > we know that function extensionality is easy. In fact, if we add the
> > Univalence Axiom to GHC[2], then this is enough to get function
> > extensionality. This is a well-known result of Homotopy Type
> > Theory[3], which is a well-explored approach that has existed for at
> > least a few years and produced more than one paper[4]. Homotopy Type
> > Theory is so sound and well understood that it has even been
> > formalized in Coq.
> >
> > Once we extend GHC with homotopies, it turns out that records reduce
> > to mere syntactic sugar, and there is a natural proof of their
> > soundness (Appendix A). Furthermore, there is a canonical projection
> > for any group of fields (Appendix B). Even better, we can make "."
> > into the identity path operator, unifying its uses in composition and
> > projection. In fact, with extended (parenthesis-free) section rules,
> > "." can also be used to terminate expressions, making Haskell friendly
> > not only to programmers coming from Java, but also to those coming
> > from Prolog!
> >
> > After some initial feedback, I'm going to create a page for the
> > Homotopy Extensional Records Proposal (HERP) on trac. There are really
> > only a few remaining questions. 1) Having introduced homotopies, why
> > not go all the way and introduce dependent records? In fact, are HERP
> > and Dependent Extensional Records Proposal (DERP) already isomorphic?
> > My suspicion is that HERP is isomorphic, but DERP is not. However, I
> > can only get away with my proof using Scott-free semantics. 2) Which
> > trac should I post this to? Given how well understood homotopy type
> > theory is, I'm tempted to bypass GHC entirely and propose this for
> > haskell-prime. 3) What syntax should we use to represent homotopies?
> > See extended discussion in Appendix C.
> >
> > HTH HAND,
> > Gershom
> >
> > [1] To be precise, 100% of Haskell 2010 programs should, usually, be
> > able to be rewritten to work with this proposal with a minimal set of
> > changes[1a].
> >
> > [1a] A minimal set of changes is defined as the smallest set of
> > changes necessary to make to a Haskell 2010 program such that it works
> > with this proposal. We can arrive at these changes by the following
> > procedure: 1) Pick a change[1b]. 2) Is it minimal? If so keep it. 3)
> > are we done? If not, make another change.
> >
> > [1b] To do this constructively, we need an order. I suggest the lo
> > mein, since noodles give rise to a free soda.
> >
> > [2] I haven't looked at the source, but I would suggest putting it in
> > the file Axioms.hs.
> >
> > [3] http://homotopytypetheory.org/
> >
> > [4] http://arxiv.org/
> >
> >
> > *Appendix A: A Natural Proof of the Soundness of HERP
> >
> > Take the category of all types in HERP, with functions as morphisms.
> > Call it C. Take the category of all sound expressions in HERP, with
> > functions as morphisms. Call it D. Define a full functor from C to D.
> > Call it F. Define a faithful functor on C and D. Call it G. Draw the
> > following diagram.
> >
> > F(X)     F(Y)
> >  |         |
> >  |         |
> >  |         |
> > G(X)     G(Y)
> >
> > Define the arrows such that everything commutes.
> >
> >
> > *Appendix B: Construction of a Canonical Projection for Any Group of
> Fields.
> >
> > 1) Take the fields along the homotopy to an n-ball.
> > 2) Pack them loosely with newspaper and gunpowder.
> 

Re: [Haskell-cafe] A Modest Records Proposal

2012-04-01 Thread Greg Weber
Hi Gershom,

This sounds very interesting even if I have no idea what you are
talking about :)
Please create a proposal linked from this page:
http://hackage.haskell.org/trac/ghc/wiki/Records
The first thing you should probably do is explain the programmer's
point of view. That ensures that we are all going through the
requirements phase correctly.
I can assure you that haskell prime would not accept a records change
until it is first implemented in GHC or another Haskell compiler.

Thanks,
Greg Weber

On Sat, Mar 31, 2012 at 11:14 PM, Gershom B  wrote:
> The records discussion has been really complicated and confusing. But
> I have a suggestion that should provide a great deal of power to
> records, while being mostly[1] backwards-compatible with Haskell 2010.
> Consider this example:
>
>    data A a = A{a :: a, aa :: a, aaa :: a -> A (a -> a)}
>    data B a = B{aaa :: a -> A (a -> a), a :: A}
>
> Now what is the type of this?
>
>     a a aa = a{a = a, aaa = aa}
>
> Using standard Haskell typeclasses this is a difficult question to
> answer. The types of the update for A and B do not unify in an obvious way.
> However, while they are intensionally quite distinct, they unify
> trivially extensionally. The obvious thing to do is then to extend the
> type system with extensional equality on record functions.
>
> Back when Haskell was invented, extensional equality was thought to be
> hard. But purity was thought to be hard too, and so were Monads. Now,
> we know that function extensionality is easy. In fact, if we add the
> Univalence Axiom to GHC[2], then this is enough to get function
> extensionality. This is a well-known result of Homotopy Type
> Theory[3], which is a well-explored approach that has existed for at
> least a few years and produced more than one paper[4]. Homotopy Type
> Theory is so sound and well understood that it has even been
> formalized in Coq.
>
> Once we extend GHC with homotopies, it turns out that records reduce
> to mere syntactic sugar, and there is a natural proof of their
> soundness (Appendix A). Furthermore, there is a canonical projection
> for any group of fields (Appendix B). Even better, we can make "."
> into the identity path operator, unifying its uses in composition and
> projection. In fact, with extended (parenthesis-free) section rules,
> "." can also be used to terminate expressions, making Haskell friendly
> not only to programmers coming from Java, but also to those coming
> from Prolog!
>
> After some initial feedback, I'm going to create a page for the
> Homotopy Extensional Records Proposal (HERP) on trac. There are really
> only a few remaining questions. 1) Having introduced homotopies, why
> not go all the way and introduce dependent records? In fact, are HERP
> and Dependent Extensional Records Proposal (DERP) already isomorphic?
> My suspicion is that HERP is isomorphic, but DERP is not. However, I
> can only get away with my proof using Scott-free semantics. 2) Which
> trac should I post this to? Given how well understood homotopy type
> theory is, I'm tempted to bypass GHC entirely and propose this for
> haskell-prime. 3) What syntax should we use to represent homotopies?
> See extended discussion in Appendix C.
>
> HTH HAND,
> Gershom
>
> [1] To be precise, 100% of Haskell 2010 programs should, usually, be
> able to be rewritten to work with this proposal with a minimal set of
> changes[1a].
>
> [1a] A minimal set of changes is defined as the smallest set of
> changes necessary to make to a Haskell 2010 program such that it works
> with this proposal. We can arrive at these changes by the following
> procedure: 1) Pick a change[1b]. 2) Is it minimal? If so keep it. 3)
> are we done? If not, make another change.
>
> [1b] To do this constructively, we need an order. I suggest the lo
> mein, since noodles give rise to a free soda.
>
> [2] I haven't looked at the source, but I would suggest putting it in
> the file Axioms.hs.
>
> [3] http://homotopytypetheory.org/
>
> [4] http://arxiv.org/
>
>
> *Appendix A: A Natural Proof of the Soundness of HERP
>
> Take the category of all types in HERP, with functions as morphisms.
> Call it C. Take the category of all sound expressions in HERP, with
> functions as morphisms. Call it D. Define a full functor from C to D.
> Call it F. Define a faithful functor on C and D. Call it G. Draw the
> following diagram.
>
> F(X)      F(Y)
>  |          |
>  |          |
>  |          |
> G(X)      G(Y)
>
> Define the arrows such that everything commutes.
>
>
> *Appendix B: Construction of a Canonical Projection for Any Group of Fields.
>
> 1) Take the fields along the homotopy to an n-ball.
> 2) Pack them loosely with newspaper and gunpowder.
> 3) Project them from a cannon.
>
> In an intuitionistic logic, the following simplification is possible:
>
> 1) Use your intuition.
>
>
> *Appendix C: Homotopy Syntax
>
> Given that we already are using the full unicode set, what syntax
> should we use to distinguish paths and homotopi

Re: [Haskell-cafe] hint and type synonyms

2012-04-01 Thread Daniel Gorín
Hi

I think I see now what the problem you observe is. It is not related to type
synonyms but to module scoping. Let me briefly discuss what hint is doing
behind the scenes and why; this may give a better understanding of what kind
of things will and will not work.

While hint is directly tied to ghc, it should be possible to implement 
something similar for any self-hosting Haskell compiler. Essentially, you need 
the compiler to provide a function compileExpr that, given a string with a
Haskell expression, returns a value of some type, say CompiledExpr (or an error 
if the string is not a valid expression, etc). So, for instance, 'compileExpr 
"not True"' will produce something of type CompiledExpr, but we know that it is 
safe to unsafeCoerce this value into one of type Bool.

Now, what happens if one unsafeCoerces to a Bool the result of running 
compileExpr on "[True]"? This is, of course, equivalent to running 
'(unsafeCoerce [True]) :: Bool' and sounds dangerous. Indeed, if your compiler 
were to keep type information in its CompiledExprs and check for type 
correctness on each operation (akin to what the interpreters for dynamic 
languages (like Perl, Ruby, etc.) do) then you may get a gracious runtime 
error; but most (if not all) of Haskell compilers eliminate all type 
information from the compiled representation (which is a good thing for 
performance), so the result of a bad cast like the one above will surely result 
in an ugly (uninformative) crash.

So how do we deal with this in hint? When you write 'interpret "not True" (as
:: Bool)' we want a runtime guarantee that "not True" is in fact a value of
type Bool. We do this by calling compileExpr with "(not True) :: Bool" instead
of just with "not True". This way, an incorrect cast is caught at runtime by
compileExpr (e.g. "([True]) :: Bool" will fail to compile). In order to do
this, the type parameter must be an instance of Data.Typeable and we use the
typeOf function to obtain the type (e.g. show (Data.Typeable.typeOf True) ==
"Bool").

This is, as you've noticed, a little fragile. For this to work, the type
expression returned by Data.Typeable.typeOf must correspond to something that
is visible to the compileExpr function. You do this in hint by adding the
relevant modules with the setImports function. It may be a little
inconvenient, but I think it is unavoidable.

I wouldn't ever recommend writing bogus instances of Typeable as in your
original example. If you find a situation where this looks like the more
sensible thing to do, I'd like to know! Also, in the example from Rc43 you
cite below, instead of running setImports on HReal.Core.Prelude you need to
run setImports on all the modules that are exported by HReal.Core.Prelude
(this can be abstracted in a function, I guess).

Since I am on this, I'd like to point out that this solution is, sadly, not
100% safe. There is still one way in which things can go wrong, and people
often trip over this. The problem roughly comes when your program defines a
type T in module M and ends up running compileExpr on an expression of type
M.T, but in a way such that module M gets compiled from scratch. When this
happens, the type M.T in your program and the type M.T used in compileExpr
may end up having two incompatible representations, and the unsafeCoerce will
lead to a crash. This typically happens when using hint to implement some
form of plugin system. Imagine you have a project organized as follows:

project/
project/src/M.hs
project/src/main.hs
project/plugins/P.hs
dist/build/M.o
dist/build/main.o
dist/build/main

where M.hs defines T; P.hs imports M and exports a function f :: T; and
main.hs imports M and runs an interpreter that sets "src" as the searchPath,
loads "plugins/P.hs", interprets "f" as a T and does something with it. Assume
dist/build/main is run from the project dir. When hint tries to load
"plugins/P.hs" the "import M" will force the compiler to search for module
M.hs in project/src and compile it again (just like ghci would do). This can
be bad! The robust solution in this case is to put all the definitions that
you want to be shared by your program and your dynamically loaded code in a
library (and make sure that it is installed before running the program).

Hope this helps...

Daniel




On Mar 31, 2012, at 8:06 PM, Claude Heiland-Allen wrote:

> Hi Daniel, cafe,
> 
> On 31/03/12 17:47, Daniel Gorín wrote:
>> Could you provide a short example of the code you'd like to write but that
>> gives you problems? I'm not able to infer it from your workaround alone...
> 
> This problem originally came up on #haskell, where Rc43 had a problem making 
> a library with a common module that re-exports several other modules:
> 
> http://hpaste.org/66281
> 
> My personal interest is somewhat secondary, having not yet used hint in a 
> real project, but code I would like to write at some point in the future is 
> much like the 'failure' below, unrolled it loo

Re: [Haskell-cafe] ANNOUNCE: fast-tags-0.0.1

2012-04-01 Thread Roman Cheplyaka
* Evan Laforge  [2012-03-31 15:23:48-0700]
> A while back I was complaining about the profusion of poorly
> documented tags generators.  Well, there is still a profusion of
> poorly documented tags generators... I was able to find 5 of them.
> 
> So, that said, here's my contribution to the problem: fast-tags,
> haskell tag generator #6.

It's useful to mention the limitations of this package, so that people
know what to expect and don't spend their time testing it only to find
that it doesn't suit their needs.

For example:
  doesn't generate tags for definitions without type signatures
  doesn't understand common extensions, such as type families

-- 
Roman I. Cheplyaka :: http://ro-che.info/



Re: [Haskell-cafe] Is this a correct explanation of FRP?

2012-04-01 Thread Peter Minten
On Fri, 2012-03-30 at 02:30 +0200, Ertugrul Söylemez wrote:
> Peter Minten  wrote:
> 
> > I've been trying to get my head around Functional Reactive Programming
> > by writing a basic explanation of it, following the logic that
> > explaining something is the best way to understand it.
> >
> > Am I on the right track with this explanation?
> 
> You are explaining a particular instance of FRP.  Functional reactive
> programming is not a single concept, but a whole family of them.
> Traditional FRP as implemented by reactive-banana (and older libraries
> like Elerea, Fran and Reactive) is based on behaviors and events.  It
> uses the notion of a time-dependent value in a direct fashion.
> Conceptionally traditional FRP is this:
> 
> Behavior a = Time -> a
> Event a    = [(Time, a)]
> 
> -- The current time at even seconds and half the current time at odd
> -- seconds:
> 
> alterTime = fullTime
> fullTime = switch (after 1) currentTime halfTime
> halfTime = switch (after 1) (fmap (/ 2) currentTime) fullTime
> 
> There is a second instance of FRP though called AFRP.  The A stands for
> "arrowized", but in modern times I prefer to think of it as
> "applicative".  The underlying control structure is now a category and
> the concept of a time-varying value is changed to a time-varying
> function (called signal function (SF)), which is just an automaton and
> there is an arrow for it.  This simplifies implementation, makes code
> more flexible and performance more predictable.  The libraries Animas
> and Yampa implement this concept (Animas is a fork of Yampa).
> Conceptionally:
> 
> SF a b    = a -> (b, SF a b)
> Event a b = SF a (Maybe b)
> 
> alterTime = fullTime
> fullTime = switch (after 1) currentTime halfTime
> halfTime = switch (after 1) ((/ 2) ^<< currentTime) fullTime

Sorry, I don't understand this. Would it be correct to say that AFRP
shares the basic ideas of FRP in that it has behaviors and
events/signals and that the main difference comes from the way AFRP is
implemented?

As I see FRP it has three components: the basic concepts, the underlying
theory and the way the libraries actually work.

As far as I understand FRP (which is not very far at all) the basic
concepts can, simplified, be formulated as:

* There are things which have a different value depending on when you
look at them. (behaviors)
* It is possible to express that something has occurred at a certain
point in time. (events/signals)
* Behaviors can change in response to events/signals.
* A behavior's value may be different on different points in time even
if no event has come in.

"Normal" FRP theory expresses behaviors as "Time -> a" and events as
"[(Time,a)]". AFRP uses some kind of "signal function" to express
behaviors, or behaviors are signal functions and those functions
interact with events. Anyway AFRP uses a completely different
theoretical way of thinking about events and behaviors.

The reactive-banana library uses some internal representation which
exposes an API using applicative functors. The theory behind it, as
shown in the haddock comments, is "Normal" FRP.

The reactive library uses monads and not just applicative functors. It
uses the "Normal" FRP style.

Yampa/Animas use arrows and have a different underpinning in math.
However the basic concepts of FRP are shared with all the other
libraries.

Netwire also uses AFRP but extends the theory with something called
signal inhibition. Like everything else it shares the basic concepts of
FRP.

FRP concepts -> FRP        -> reactive
                           -> reactive-banana
             -> AFRP       -> Yampa
                           -> Animas
             -> wired AFRP -> Netwire

Is this a correct way to summarize the differences?

Greetings,

Peter Minten




Re: [Haskell-cafe] Is this a correct explanation of FRP?

2012-04-01 Thread Michael Snoyman
On Sat, Mar 31, 2012 at 7:15 PM, Peter Minten  wrote:
> On Fri, 2012-03-30 at 09:15 +0300, Michael Snoyman wrote:
>
>> First you state that we shouldn't use `union` for the `ePitch` Event,
>> and then you used it for `bOctave`. Would it be more efficient to
>> implement bOctave as someting like:
>>
>>     eOctave :: Event t (Int -> Int)
>>     eOctave =
>>         filterJust (toStep <$> eKey)
>>       where
>>         toStep '+' = Just (+ 1)
>>         toStep '-' = Just (subtract 1)
>>         toStep _ = Nothing
>>
>>     bOctave :: Behavior t Octave
>>     bOctave = accumB 0 eOctave
>
> Yes, though it's slightly less bad: the case with ePitch was something
> like 6 appends. It was mostly a case of badly copying the style from the
> examples and not realizing the examples use event streams from different
> outside sources. I've adapted the example to use something similar to
> your eOctave.
>
>> Also, I'm left wondering: how would you create a new event stream in
>> the first place? You're telling us to just rely on `eKey`, which is
>> fair, but a great follow-up would demonstrate building it. Looking
>> through the docs I found `newEvent`, but I'm not quite certain how I
>> would combine it all together.
>
> The updated document, which now lives at
> http://www.haskell.org/haskellwiki/FRP_explanation_using_reactive-banana
> contains a "Making the example runnable" section which shows how connect
> the example with the outside world.
>
> The short version, regarding the creation of new events, is that you
> have to do it in two parts. You need newAddHandler in the IO monad to
> get a (a -> IO ()) function that fires the event, as well as something
> called an AddHandler; fromAddHandler in the NetworkDescription monad
> then turns that AddHandler into an event. It's not possible to get
> values out of the NetworkDescription monad (without IORef tricks) and
> events can only be created within a NetworkDescription monad.
>
> The newEvent function looks like what you'd want, but because you can't
> get the event firing function out of NetworkDescription its use is
> limited.
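>
> A minimal sketch of that two-part wiring (the details here are from
> memory of the then-current reactive-banana API, so treat them as
> approximate):
>
>     import Reactive.Banana
>     import Reactive.Banana.Frameworks
>
>     main :: IO ()
>     main = do
>         (addKey, fireKey) <- newAddHandler   -- IO part: handler + fire
>         network <- compile $ do
>             eKey <- fromAddHandler addKey    -- NetworkDescription part
>             reactimate (fmap print eKey)     -- run an IO action per event
>         actuate network
>         getLine >>= mapM_ fireKey            -- fire the event per character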
>
> Greetings,
>
> Peter Minten
>

This looks great, thanks.

Michael



Re: [Haskell-cafe] Is this a correct explanation of FRP?

2012-04-01 Thread Heinrich Apfelmus

Peter Minten wrote:
> The updated document, which now lives at
> http://www.haskell.org/haskellwiki/FRP_explanation_using_reactive-banana
> contains a "Making the example runnable" section which shows how to connect
> the example with the outside world.

I have added a link from the reactive-banana project homepage. Thanks
for your great explanation!



Best regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com

