Re: [GHC] #2189: hSetBuffering stdin NoBuffering doesn't work on Windows

2009-03-06 Thread GHC
#2189: hSetBuffering stdin NoBuffering doesn't work on Windows
---+
Reporter:  FalconNL|Owner:  igloo  
Type:  merge   |   Status:  new
Priority:  high|Milestone:  6.10.2 
   Component:  libraries/base  |  Version:  6.8.2  
Severity:  normal  |   Resolution: 
Keywords:  hsetbuffering buffering buffer  |   Difficulty:  Unknown
Testcase:  |   Os:  Windows
Architecture:  x86 |  
---+
Changes (by simonmar):

  * owner:  simonmar => igloo
  * type:  bug => merge

Comment:

 Patch applied:

 {{{
 Thu Mar  5 03:33:23 PST 2009  Simon Marlow marlo...@gmail.com
   * FIX #2189: re-enabled cooked mode for Console-connected Handles on
 Windows
 }}}

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/2189#comment:22
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


[GHC] #3076: Make genericLength tail-recursive so it doesn't overflow stack

2009-03-06 Thread GHC
#3076: Make genericLength tail-recursive so it doesn't overflow stack
-+--
Reporter:  Syzygies  |  Owner:  
Type:  run-time performance bug  | Status:  new 
Priority:  normal|  Component:  Compiler
 Version:  6.10.1|   Severity:  normal  
Keywords:|   Testcase:  
  Os:  Unknown/Multiple  |   Architecture:  Unknown/Multiple
-+--
 A likely use for genericLength is to count lists with more elements than
 an Int can represent. However, the source code is not tail-recursive,
 making it a poor consumer, so genericLength easily overflows the stack.

 Here is the source code from 6.10.1:

 {{{
 genericLength   :: (Num i) => [b] -> i
 genericLength []=  0
 genericLength (_:l) =  1 + genericLength l
 }}}

 Here is a proposed alternative:

 {{{
 genericLength ∷ (Num i) ⇒ [b] → i
 genericLength = len 0 where
   len n [] = n
   len n (_:xt) = len (n+1) xt
 }}}

 In my test application (enumerating the 66,960,965,307 atomic lattices on
 six atoms) this alternative avoids overflowing the stack.
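
 For what it's worth (this is an aside, not part of the proposal above),
 the tail-recursive version still builds a chain of suspended additions in
 the accumulator, which can itself exhaust the stack when it is finally
 forced. A sketch of a strict-accumulator variant (hypothetical name
 genericLength') would be:

 {{{
 genericLength' :: (Num i) => [b] -> i
 genericLength' = len 0
   where
     len n []     = n
     len n (_:xt) = let n' = n + 1 in n' `seq` len n' xt
 }}}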

 [This is not the same issue as
 http://hackage.haskell.org/trac/ghc/ticket/2962]

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/3076
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #3075: validation requires a tool that is not included in utils

2009-03-06 Thread GHC
#3075: validation requires a tool that is not included in utils
-+--
Reporter:  nr|Owner:  igloo  
Type:  bug   |   Status:  new
Priority:  normal|Milestone: 
   Component:  Build System  |  Version:  6.11   
Severity:  major |   Resolution: 
Keywords:|   Difficulty:  Unknown
Testcase:|   Os:  Linux  
Architecture:  x86   |  
-+--
Changes (by igloo):

  * owner:  => igloo
  * difficulty:  => Unknown

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/3075#comment:1
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


[GHC] #3077: 'make' fails in utils (6.11)

2009-03-06 Thread GHC
#3077: 'make' fails in utils (6.11)
---+
Reporter:  nr  |  Owner:  
Type:  bug | Status:  new 
Priority:  normal  |  Component:  Build System
 Version:  6.11|   Severity:  normal  
Keywords:  |   Testcase:  
  Os:  Linux   |   Architecture:  x86 
---+
 In order to get 'make boot' to work in compiler, I tried running 'make' in
 utils.
 Something seems to be off with the configuration, it appears.  I'm attaching the full output.

 Sorry to keep submitting these things with no patches.  I don't have a
 strong enough picture of what's ''supposed'' to happen during build to be
 confident of making things better not worse.

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/3077
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #2189: hSetBuffering stdin NoBuffering doesn't work on Windows

2009-03-06 Thread GHC
#2189: hSetBuffering stdin NoBuffering doesn't work on Windows
---+
Reporter:  FalconNL|Owner:  igloo  
Type:  merge   |   Status:  closed 
Priority:  high|Milestone:  6.10.2 
   Component:  libraries/base  |  Version:  6.8.2  
Severity:  normal  |   Resolution:  fixed  
Keywords:  hsetbuffering buffering buffer  |   Difficulty:  Unknown
Testcase:  |   Os:  Windows
Architecture:  x86 |  
---+
Changes (by igloo):

  * status:  new => closed
  * resolution:  => fixed

Comment:

 Merged

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/2189#comment:23
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


[GHC] #3078: Erroneous warnings for -XPatternGuards

2009-03-06 Thread GHC
#3078: Erroneous warnings for -XPatternGuards
---+
Reporter:  guest   |  Owner:
Type:  bug | Status:  new   
Priority:  normal  |  Component:  Compiler  
 Version:  6.10.1  |   Severity:  normal
Keywords:  |   Testcase:
  Os:  Linux   |   Architecture:  x86_64 (amd64)
---+
 Using the (nonstandard) PatternGuards extension I get a meaningless warning:
 {{{
 walltest.hs:8:10:
 Warning: Pattern match(es) are non-exhaustive
  In the definition of `n': Patterns not matched:
 Ok, modules loaded: Main.
 }}}


 {{{
 {-# LANGUAGE PatternGuards #-}
 {-# OPTIONS -Wall #-}

 data T = A Int | B Int

 funny :: T -> Int
 funny t = n
   where n | A x <- t = x
           | B x <- t = x
 }}}

 But none of that happens when using a case expression for the same thing:
 {{{
 {-# OPTIONS -Wall #-}

 data T = A Int | B Int

 funny :: T -> Int
 funny t = n
   where n = case t of
               (A x) -> x
               (B x) -> x
 }}}

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/3078
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: Loop unrolling + fusion ?

2009-03-06 Thread Simon Marlow

Claus Reinke wrote:

its loop unroller on this guy haven't succeeded. -funroll-loops and
-funroll-all-loops don't touch it,


That's because the C produced by GHC doesn't look like a loop to GCC.  
This can be fixed but given that we are moving away from -fvia-C  
anyway, it probably isn't worth doing.


That was one of my questions in the optimization and rewrite rules
thread: shouldn't -fvia-C be supported (as a non-default option)
for at least as long as the alternative isn't a clear win in all cases?

If there are small changes that could make GHC-generated code
more palatable to GCC's optimizer, wouldn't that be worth doing?
Once -fvia-C is allowed to bitrot to the level of unoptimized
bootstraps only, we might never get the good performance route
back, so why not keep it in good shape as long as it offers real benefits?


The trouble with supporting multiple backends is that the cost in terms of 
testing and maintenance is high.  And the registerised -fvia-C backend is 
particularly nasty, coming as it does with thousands of lines of Perl 4 
that regularly get broken by new versions of gcc.


The registerised via-C backend should have been retired long ago.  It's 
time to take it round back and shoot it.  We should spend our efforts on 
finding a good long-term solution rather than patching this dead-end, IMHO.


Cheers,
Simon

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Warning about unrecognised pragma

2009-03-06 Thread Ian Lynagh

Hi Colin,

On Thu, Feb 05, 2009 at 11:27:05AM +, Colin Paul Adams wrote:
 I am getting messages:
 
 Board_representation.hs:407:0: Unrecognised pragma
 
 Board_representation.hs:442:0: Unrecognised pragma
 
 Board_representation.hs:458:0: Unrecognised pragma
 
 I looked into the documentation to see how to suppress these warnings
 (preferably for the particular pragma concerned), but I found (from 8.13):
 
 any pragma encountered with an unrecognised word is (silently) ignored.

I've fixed the documentation now, but can I ask what pragmas these are?
If they're something common then we should teach GHC to ignore them.


Thanks
Ian

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Loop unrolling + fusion ?

2009-03-06 Thread Claus Reinke

That was one of my questions in the optimization and rewrite rules
thread: shouldn't -fvia-C be supported (as a non-default option)
for at least as long as the alternative isn't a clear win in all cases?


The trouble with supporting multiple backends is that the cost in terms of 
testing and maintenance is high.  And the registerised -fvia-C backend is 
particularly nasty, coming as it does with thousands of lines of Perl 4 
that regularly get broken by new versions of gcc.


Yes, I can understand that you'd like to leave that part behind sometime
before yesterday:-) I assume that this very complexity means that the
-fvia-C route doesn't really get all the way to its most interesting
promises (easy portability, and full backend optimizations inherited
from gcc). And with that in mind, I can also understand that you don't 
want to put in any further work into trying to improve it, if that distracts 
from a better long-term solution. 


What I don't understand yet is the roadmap for replacing -fvia-C. We've
seen -fvia-C being demoted from default to backup (fine by me), we've
seen a feature supported only by -fvia-C removed completely, instead
of seeing support for it added to the -fasm route (macro-based APIs
used to work with ffi, would now require a wrapper generator, which
doesn't exist yet).

Indications are that -fvia-C still tends to produce better code (even 
though it is not the best that ghc+gcc could produce) than -fasm (is that 
any better for the new backend?). And last, but not least, ghc has more 
limited resources than gcc, so how is ghc going to beat gcc at the 
portability and backend optimizations game while still making progress
in its core competencies (ie, higher-level improvements; there's also the 
interesting side-issue of how the two stages of optimizations are going to 
interact in ghc, if there is a barrier that can only be crossed in one direction)?


The registerised via-C backend should have been retired long ago.  It's 
time to take it round back and shoot it.  We should spend our efforts on 
finding a good long-term solution rather than patching this dead-end, IMHO.


No disagreement there (apart from the violent metaphor). I'm just worried 
about pragmatics, ie scuttling the ship before we've counted our life boats!-) 
And I suspect that for ghc trying to do everything itself on all platforms 
(rather than trying for very good -fasm on some platforms of interest, and 
good -fvia-C as a fallback everywhere else) is going to be anything but
less work than patching that dead-end (though no doubt more interesting).


In other words, what is the plan wrt backends, especially wrt
recovering the optimizations and portability issues previously left to
gcc? When will the fast via-C route be retired, what quality of
replacement will be in place at that time, how long to catch up
to where we are now, how to keep up, etc.?

Claus

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Loop unrolling + fusion ?

2009-03-06 Thread Claus Reinke

The implementation I'm thinking of is basically trivial. You just add
the information gathered from the pragmas onto the Ids, then have a
dedicated core pass that looks at the pragmas and does it's
worker/wrapper thing. The technology to do peeling/unrolling is
trivial and there already examples in the codebase (in case liberation
and SAT). If someone can spec out what they actually want and GHC HQ
give it the thumbs up I would be happy to do the grunt work on
implementing this feature.


Yes, please!-)

My preferred spec would be roughly

{-# NOINLINE f #-}
   as now

{-# INLINE f #-}
   works as now, which is for non-recursive f only (might in future
   be taken as go-ahead for analysis-based recursion unfolding)

{-# INLINE f PEEL n #-}
   inline calls *into* recursive f (called loop peeling for loops)

{-# INLINE f UNROLL m #-}
   inline recursive calls to f *inside* f (called loop unrolling for loops)

{-# INLINE f PEEL n UNROLL m #-}
   combine the previous two

   The numeric parameters are to be interpreted as if each call to
   f was annotated with both PEEL and UNROLL limits, to be
   decreased as appropriate for every PEEL or UNROLL action.
   Peeling and unrolling stop when the respective count annotation
   has reached 0. Note that mutual recursion is the domain of PEEL,
   while UNROLL only applies to direct recursion.

   {-# INLINE f PEEL n #-}, for n>0, corresponds to worker/
   wrapper transforms (previously done manually) + inline wrapper,
   and should therefore also be taken as a hint for the compiler to
   try the static argument transformation for f (the worker);
   see the sketch after this list.


   Non-supporting implementations should treat these as INLINE
   pragmas (same warning/ignore or automatic unfold behaviour).
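
For concreteness, here is the kind of hand transformation that INLINE PEEL
is meant to subsume (my illustration only, not the proposed feature; the
names myMap/go are made up): split the recursive definition into a small
inlinable wrapper and a local worker with the static argument factored out.

    -- before: directly recursive, so an INLINE pragma on it is not honoured
    myMap :: (a -> b) -> [a] -> [b]
    myMap f []     = []
    myMap f (x:xs) = f x : myMap f xs

    -- after, by hand: inlinable wrapper around a local worker
    {-# INLINE myMap #-}
    myMap :: (a -> b) -> [a] -> [b]
    myMap f = go
      where
        go []     = []
        go (x:xs) = f x : go xs

Inlining the wrapper at a call site then exposes go, specialised to the
given f, to the usual optimisations.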

About the pragma name: as far as I can tell, Hugs simply ignores
INLINE pragmas, no matter what they say, other implementations
could just ignore the PEEL/UNROLL part (possibly with a warning)
- do any of them support INLINE on recursive definitions?

The only problem is that GHC itself fails with a parse error, which
would lead to version issues (perhaps GHC should have allowed
additional information to be attached to otherwise syntactically complete
pragmas, or warnings instead of errors, but that hitch is out in the wild now).

Having separate PEEL/UNROLL pragmas would make ignoring them
the default action, but would clutter the pragma name space as well
as the source code; it also wouldn't make explicit that we are indeed 
refining the INLINE pragma for the case of recursive functions (which 
GHC currently ignores or complains about), by detailing how we want 
the recursive definition to be inlined.


Since we are talking about a refinement of the INLINE pragma, we
also need to look at that pragma's existing subtleties:-(

- no functions inlined into f: should be subject to override by
   INLINE pragmas (even for the non-recursive case?)
- no float-in/float-out/cse: ??
- no worker/wrapper transform in strictness analyser: we do get the 
   same effect from INLINE PEEL, so this should be okay, right?

- loop breakers: PEEL/UNROLL have their own limits, creating
   an intrinsic loop breaker when the counters run out

Is that sufficient?
Claus

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Loop unrolling + fusion ?

2009-03-06 Thread Claus Reinke

My preferred spec would be roughly

{-# NOINLINE f #-}
   as now

{-# INLINE f #-} 
   works as now, which is for non-recursive f only (might in future
   be taken as go-ahead for analysis-based recursion unfolding)

{-# INLINE f PEEL n #-}
   inline calls *into* recursive f (called loop peeling for loops)

{-# INLINE f UNROLL m #-}
   inline recursive calls to f *inside* f (called loop unrolling for loops)

{-# INLINE f PEEL n UNROLL m #-}
   combine the previous two

   The numeric parameters are to be interpreted as if each call to
   f was annotated with both PEEL and UNROLL limits, to be
   decreased as appropriate for every PEEL or UNROLL action.


hmm, appropriate is one of those words that shouldn't occur in specs,
not even rough ones, so let's flesh this out a bit, by abstract example.

let f = ..f.. in f{n,m}  -PEEL->  let f = ..f.. in ..f{n-1,m}..

let f = ..f{n,m}.. in ..  -UNROLL->  let f = ..|..f{n,m-1}..|.. in ..

In words: the call being peeled/unrolled disappears, being replaced by
a copy of the definition, in which the decremented counts are applied to 
the calls of the same function created by unfolding. Is that specific enough?
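
To make that concrete (my own hand-worked illustration; the counts are
shown in comments), take

    count n []     = n
    count n (_:xs) = count (n+1) xs

Peeling a call site count{1,0} 0 ys replaces the call by a copy of the
body, whose own recursive call now carries PEEL 0:

    case ys of { []     -> 0
               ; (_:xs) -> count (0+1) xs }      -- this count has PEEL 0

Unrolling the recursive call inside the definition (UNROLL 1) inlines the
body at that call site, whose inner call then carries UNROLL 0:

    count n []     = n
    count n (_:xs) = case xs of
                       []      -> n+1
                       (_:xs') -> count ((n+1)+1) xs'   -- UNROLL 0 here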



   Peeling and unrolling stop when the respective count annotation
   has reached 0. Note that mutual recursion is the domain of PEEL,
   while UNROLL only applies to direct recursion.

   {-# INLINE f PEEL n #-}, for n>0, corresponds to worker/
   wrapper transforms (previously done manually) + inline wrapper,
   and should therefore also be taken as a hint for the compiler to 
   try the static argument transformation for f (the worker).


   Non-supporting implementations should treat these as INLINE
   pragmas (same warning/ignore or automatic unfold behaviour).

Since we are talking about a refinement of the INLINE pragma, we
also need to look at that pragma's existing subtleties:-(

- no functions inlined into f: should be subject to override by
   INLINE pragmas (even for the non-recursive case?)
- no float-in/float-out/cse: ??
- no worker/wrapper transform in strictness analyser: we do get the 
   same effect from INLINE PEEL, so this should be okay, right?

- loop breakers: PEEL/UNROLL have their own limits, creating
   an intrinsic loop breaker when the counters run out


Loop breakers are still needed, in spite of the explicit limits. Consider

let {odd x = ..even{1,0}..; even x = ..odd{1,0}..} in odd{1,0} n

Peeling odd gives a call to even, peeling of which gives a fresh, not
decremented, call to odd! Unless one makes a copy of the whole
mutual recursion, with the odd calls adjusted. This might be easier 
to handle in your unfolding as a separate core2core pass scenario, 
where the pass might keep track of unfoldings already done (instead 
of trying to encode that information locally, in annotations).


Other issues?
Claus

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell] Definitions of purity and Lazy IO

2009-03-06 Thread minh thu
2009/3/6  o...@okmij.org:

 As Amr Sabry aptly observed more than a decade ago discussions of
 purity and referential transparency usually lead to confusion and
 disagreement. His JFP paper provided the needed rigor and argued that
 Haskell even _with regular file (stream) IO_ is pure. As was shown
 yesterday, with Lazy IO, Haskell is not pure.

 Before we discuss definitions, let us note the motivations. Suppose I
 have a Haskell program. As every single (compiled) Haskell program it
 has the main function, of the type IO a for some a. It must use IO, at
 least to print the final result, the observation of the
 program. Suppose the program contains the expression e1 + e2 adding
 two integer expressions. Can I replace this expression with e2 + e1
 and be sure the observations of my program are unaffected?

 If one says that ``you should assume that every IO operation returns a
 random result''  then we have no ability to reason about any
 real Haskell program. We can't be sure of soundness of any compiler
 optimization. So, Haskell is far worse than C!? With C, Xavier Leroy
 has managed to create a provably correct optimizing compiler, and
 mechanically _prove_ the soundness of all optimizations. A C program,
 just like a Haskell program, must use IO at least to print the
 final result. We can never hope to build a provably correct compiler for
 Haskell? We cannot prove observational equivalences of any real
 Haskell program? Isn't that sad?

 The formal question is thus: are the following two expressions f1 and f2,
 of a *pure type* Int->Int->Int

 f1, f2:: Int -> Int -> Int
 f1 = \e1 e2 -> case (e1,e2) of
                  (1,_) -> e1 - e2
                  (_,_) -> e1 - e2

 f2 = \e1 e2 -> case (e1,e2) of
                  (_,1) -> e1 - e2
                  (_,_) -> e1 - e2

 equal? Before one invokes an equational theory or says that both these
 expressions are just integer subtraction, let me clarify the
 question: are f1 and f2 at least weakly observationally equivalent?
 That is, for any program context C[] such that C[f1] is well-typed,
 the program C[f2] must too be well-typed, and if one can observe the
 result of C[f1] and of C[f2], the two observations must be identical. The
 observation of a program may (and often does) involve side-effects and
 IO (more on it below). The message posted yesterday exhibited a context C[] 
 that
 distinguishes f1 from f2. Thus, in presence of Lazy IO, any equational
 theory that equates f1 and f2 cannot be sound. I don't think one can
 design such a context C[] using the regular, eager file IO.
 [...]

Thanks for clarifying, I see how I was wrong yesterday. I was too quick to
point at the use of IO instead of observing that the pure functions f1 and f2
themselves have an effect.

Thu
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Definitions of purity and Lazy IO

2009-03-06 Thread Duncan Coutts
On Thu, 2009-03-05 at 20:11 -0800, o...@okmij.org wrote:
   
 Before one invokes an equational theory or says that both these
 expressions are just integer subtraction, let me clarify the
 question: are f1 and f2 at least weakly observationally equivalent?
 That is, for any program context C[] such that C[f1] is well-typed, 
 the program C[f2] must too be well-typed, and if one can observe the
 result of C[f1] and of C[f2], the two observations must be identical. The
 observation of a program may (and often does) involve side-effects and
 IO (more on it below). The message posted yesterday exhibited a
 context C[] that distinguishes f1 from f2. Thus, in presence of Lazy
 IO, any equational theory that equates f1 and f2 cannot be sound. I
 don't think one can design such a context C[] using the regular, eager
 file IO.

I'm not sure that observational equivalence on IO computations is the right
approach. For example, it means we cannot explain imprecise exceptions.

http://research.microsoft.com/en-us/um/people/simonpj/papers/imprecise-exn.htm

With imprecise exceptions we define the meaning of a value that could
return one of a number of exceptions to actually return the whole set
(see Sections 3.4, 3.5 and 4 above). But on any particular run the
representative member of that set of exceptions can be different.

They give the example:

do
   v1 <- getException ((1/0) + error "Urk")
   v2 <- getException ((1/0) + error "Urk")
  return (v1 == v2)

The solution is that getException makes a non-deterministic choice each
time.
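
(For readers without the paper to hand, here is one rough way to read
getException in terms of the current Control.Exception API; this is my
gloss, not the paper's definition, and the String result is only there so
the two values in the example above can be compared with (==):

  getException :: a -> IO (Either String a)
  getException x = do
    r <- try (evaluate x)
    case r of
      Left e  -> return (Left (show (e :: SomeException)))
      Right v -> return (Right v)

using try and evaluate from Control.Exception.)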

I expect that unsafeInterleaveIO can be explained in a similar way, via
non-determinism in the choice of interleaving of events. Any final
observations correspond to some particular interleaving, we just do not
know which one it is going to be until we inspect the value. The
semantics is that it can be any of them. Different runs of the program,
different compiler transformations or evaluation orders can change which
values we typically observe without breaking the semantics that it could
be any of the possible values.

This is somewhat stronger than the semantics of a random number
generator in IO because not all interleavings are possible and some
interleavings of effects do not interfere with each other.

 Thus Haskell even with IO may be called pure functional. With Lazy IO,
 it can no longer be called pure functional as different orders of
 evaluation of arguments of a function lead to different program
 observations.

The same is true of imprecise exceptions and yet we do not seem to be
worried about them destroying purity.

Duncan

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] string type class

2009-03-06 Thread Matthew Pocock
Hi,

It seems every time I look at hackage there is yet another stringy datatype. 
For lots of apps, the particular stringy datatype you use matters for 
performance but not algorithmic reasons. Perhaps this is a good time for 
someone to propose a stringy class?

Matthew
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] string type class

2009-03-06 Thread Wolfgang Jeltsch
On Friday, 6 March 2009, 13:33, Matthew Pocock wrote:
 Hi,

 It seems every time I look at hackage there is yet another stringy
 datatype. For lots of apps, the particular stringy datatype you use matters
 for performance but not algorithmic reasons. Perhaps this is a good time
 for someone to propose a stringy class?

 Matthew

There is already the class IsString which was introduced for overloaded string 
literals.

However, the name is terrible. No other Haskell class I know of has an “Is” at 
its beginning. Classes don’t name properties (IsNum, IsMonoid, Has…).

Maybe you can also try to convince the masters of the IsString class to change 
the class name. My previous attempts through mailing list e-mails seemed to 
have no effect. :-( 

Best wishes,
Wolfgang
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] string type class

2009-03-06 Thread minh thu
2009/3/6 Wolfgang Jeltsch g9ks1...@acme.softbase.org:
 On Friday, 6 March 2009, 13:33, Matthew Pocock wrote:
 Hi,

 It seems every time I look at hackage there is yet another stringy
 datatype. For lots of apps, the particular stringy datatype you use matters
 for performance but not algorithmic reasons. Perhaps this is a good time
 for someone to propose a stringy class?

 Matthew

 There is already the class IsString which was introduced for overloaded string
 literals.

 However, the name is terrible. No other Haskell class I know of has an “Is” at
 its beginning. Classes don’t name properties (IsNum, IsMonoid, Has…).

LLVM bindings use it a lot...

Cheers,
Thu
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] string type class

2009-03-06 Thread Edward Kmett
I'd be more interested in a kitchen-sink List class. ByteString,
ByteString.Lazy, Text, [a], and the pending Text.Lazy all support the basic
operations of lists of a particular type. It'd be a fairly huge dictionary
by the current API design of those, however. It's just a reiteration of the
classic Collection class example.

The biggest problem is that it requires either MPTCs/fundeps or type aliases
to make it work, which makes less progressive folks queasy about its
inclusion as a standard library.
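
For the sake of illustration, the kind of class being discussed looks
roughly like this (a sketch only; the names are made up, and this is not
the API of the ListLike package mentioned later in this thread):

  {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

  -- the container type c determines its element type e
  class Listy c e | c -> e where
    empty  :: c
    cons   :: e -> c -> c
    uncons :: c -> Maybe (e, c)

  instance Listy [a] a where
    empty         = []
    cons          = (:)
    uncons []     = Nothing
    uncons (x:xs) = Just (x, xs)

A real version would carry dozens of methods (maps, filters, folds, ...)
to match the existing APIs, which is the huge-dictionary problem above.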

On the other hand, if you're going that far, it'd be nice to factor out a
superclass or two for things like lookup/insert functionality, so you can
eliminate a major reason why Data.Map has to be imported qualified as well.

-Edward Kmett
On Fri, Mar 6, 2009 at 11:16 AM, minh thu not...@gmail.com wrote:

 2009/3/6 Wolfgang Jeltsch g9ks1...@acme.softbase.org:
  On Friday, 6 March 2009, 13:33, Matthew Pocock wrote:
  Hi,
 
  It seems every time I look at hackage there is yet another stringy
  datatype. For lots of apps, the particular stringy datatype you use
 matters
  for performance but not algorithmic reasons. Perhaps this is a good time
  for someone to propose a stringy class?
 
  Matthew
 
  There is already the class IsString which was introduced for overloaded
 string
  literals.
 
  However, the name is terrible. No other Haskell class I know of has an
 “Is” at
  its beginning. Classes don’t name properties (IsNum, IsMonoid, Has…).

 LLVM bindings use it a lot...

 Cheers,
 Thu
  ___
 Haskell mailing list
 Haskell@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] string type class

2009-03-06 Thread Wolfgang Jeltsch
On Friday, 6 March 2009, 17:31, you wrote:
 What name would you suggest?

If we didn’t have to care about compatibility, I would name the class String
and drop the type alias String.

It’s hard to come up with a good name since String is already taken. However, 
things like StringLike, Stringy, Chars etc. are probable still better than 
IsString since they describe the things (values of instances of that class) 
instead of stating a property of them.

Best wishes,
Wolfgang
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] string type class

2009-03-06 Thread David Menendez
On Fri, Mar 6, 2009 at 12:05 PM, Wolfgang Jeltsch
g9ks1...@acme.softbase.org wrote:
 On Friday, 6 March 2009, 17:31, you wrote:
 What name would you suggest?

 If we wouldn’t have to care about compatibility, I would name the class String
 and drop the type alias String.

 It’s hard to come up with a good name since String is already taken. However,
 things like StringLike, Stringy, Chars etc. are probable still better than
 IsString since they describe the things (values of instances of that class)
 instead of stating a property of them.

How about CharSequence?

-- 
Dave Menendez d...@zednenem.com
http://www.eyrie.org/~zednenem/
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] string type class

2009-03-06 Thread minh thu
2009/3/6 David Menendez d...@zednenem.com:
 On Fri, Mar 6, 2009 at 12:05 PM, Wolfgang Jeltsch
 g9ks1...@acme.softbase.org wrote:
  On Friday, 6 March 2009, 17:31, you wrote:
 What name would you suggest?

 If we wouldn’t have to care about compatibility, I would name the class 
 String
 and drop the type alias String.

 It’s hard to come up with a good name since String is already taken. However,
 things like StringLike, Stringy, Chars etc. are probable still better than
 IsString since they describe the things (values of instances of that class)
 instead of stating a property of them.

 How about CharSequence?

Characters ?
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] Fully-funded doctoral studentships in dependently type programming at Oxford and Strathclyde

2009-03-06 Thread Jeremy . Gibbons
FULLY-FUNDED DOCTORAL STUDENTSHIPS IN
DEPENDENTLY-TYPED PROGRAMMING
AT OXFORD AND STRATHCLYDE

A new EPSRC-funded project on Reusability and Dependent Types
has just started, as a collaboration between the Functional
Programming Laboratory at the University of Nottingham (Thorsten
Altenkirch), the Algebra of Programming group at the University of
Oxford (Jeremy Gibbons), and the Mathematically Structured Programming
group at the University of Strathclyde (Neil Ghani and Conor McBride).

We are all familiar with Milner's slogan that well-typed programs
cannot go wrong. Types express properties of programs; more
expressive type systems - such as dependent typing - can state
properties more precisely, providing stronger guarantees of behaviour
and additional guidance in development.  However, this expressivity
comes at a price: more specific typing can reduce opportunities for
code reuse. The goal of this project is to investigate techniques for
promoting reuse without sacrificing precision; in particular, how can
we layer dependently typed programs, imposing stronger invariants onto
more general library code?

Two fully-funded doctoral studentships are available to work in this
area: one at Oxford (with JG) and one at Strathclyde (with CTM). Each
covers stipend, fees (at the home/EU rate), equipment, and travel, and
is for three and a half years from October 2009. The closing date for
applications is 15th April 2009.  For further details, see:

  http://web.comlab.ox.ac.uk/news/72-full.html
  http://personal.cis.strath.ac.uk/~conor/phds/

or contact one of the principal investigators on the project:

  Thorsten Altenkirch (t...@cs.nott.ac.uk) 
  Neil Ghani (n...@cis.strathclyde.ac.uk) 
  Jeremy Gibbons (j...@comlab.ox.ac.uk)
  Conor McBride (co...@strictlypositive.org)

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] string type class

2009-03-06 Thread Sean Leather
 I'd be more interested in a kitchen-sink List class. ByteString,
 ByteString.Lazy, Text, [a], and the pending Text.Lazy all support the basic
 operations of lists of a particular type. It'd be a fairly huge dictionary
 by the current API design of those however. Its just a reiteration of the
 classic Collection class example.


Like this?

  http://hackage.haskell.org/cgi-bin/hackage-scripts/package/ListLike

Sean
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] string type class

2009-03-06 Thread Brandon S. Allbery KF8NH

On 2009 Mar 6, at 11:13, Wolfgang Jeltsch wrote:

On Friday, 6 March 2009, 13:33, Matthew Pocock wrote:

It seems every time I look at hackage there is yet another stringy
datatype. For lots of apps, the particular stringy datatype you use
matters for performance but not algorithmic reasons. Perhaps this is a
good time for someone to propose a stringy class?


There is already the class IsString which was introduced for
overloaded string literals.

However, the name is terrible. No other Haskell class I know of has
an “Is” at its beginning. Classes don’t name properties (IsNum,
IsMonoid, Has…).


But their proper names were available.  String is already taken for  
a concrete type, which complicates things; IsString was the simplest  
answer (and possibly avoided bikeshedding by being the answer nobody  
liked :)


Maybe you can also try to convince the masters of the IsString class
to change the class name. My previous attempts through mailing list
e-mails seemed to have no effect. :-(


I think the problem here is you need to get buy-in from everyone  
involved, which includes libraries@ (String), ghc (IsString is  
recognized in order to enable string overloading), ByteString, and  
potentially anyone who uses any of the above.  That said, if you want  
to officially propose it you need to read up on how to properly  
propose such changes to librar...@haskell.org.


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allb...@kf8nh.com
system administrator [openafs,heimdal,too many hats] allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon universityKF8NH




PGP.sig
Description: This is a digitally signed message part
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] string type class

2009-03-06 Thread Brandon S. Allbery KF8NH

On 2009 Mar 6, at 12:24, David Menendez wrote:

How about CharSequence?



I'd be tempted on first sight to assume that's related to Data.Seq.

--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allb...@kf8nh.com
system administrator [openafs,heimdal,too many hats] allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon universityKF8NH




PGP.sig
Description: This is a digitally signed message part
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] string type class

2009-03-06 Thread Till Mossakowski

Sean Leather wrote:

Like this?

  http://hackage.haskell.org/cgi-bin/hackage-scripts/package/ListLike


Indeed, a class StringLike is included there as well.
Why not take or improve that one?

Till
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] string type class

2009-03-06 Thread Chris Kuklewicz

Matthew Pocock wrote:
It seems every time I look at hackage there is yet another stringy 
datatype. For lots of apps, the particular stringy datatype you use 
matters for performance but not algorithmic reasons. Perhaps this is a 
good time for someone to propose a stringy class?


Not likely.

I did define my own (private) class for regular expressions, to abstract over 
String, the ByteStrings, and Seq Char.  But it is used in one place and is a 
wart that should be removed.


The simple task of looping over the contents of a String (once, forward) is 
quite different from a Strict ByteString (using an index and a lookup).


This means for decent efficiency I need two copies of my code, hand specialized 
to each case.


tail or (x:xs) : very efficient for String (no allocation)
tail or uncons : not efficient for ByteString (allocation, might as well
convert to [Char])


And indexing by Int is O(n) for String and O(1) for ByteString.

So there are few algorithms that can access both efficiently.
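
For illustration, the two looping styles look roughly like this (a sketch
with made-up names, counting spaces):

  {-# LANGUAGE BangPatterns #-}
  import qualified Data.ByteString.Char8 as B

  -- String: consume the list by pattern matching on (:)
  countSpaces :: String -> Int
  countSpaces = go 0
    where
      go !n []     = n
      go !n (c:cs) = go (if c == ' ' then n + 1 else n) cs

  -- strict ByteString: walk an index with O(1) lookups
  countSpacesBS :: B.ByteString -> Int
  countSpacesBS s = go 0 0
    where
      len = B.length s
      go !i !n
        | i >= len           = n
        | B.index s i == ' ' = go (i + 1) (n + 1)
        | otherwise          = go (i + 1) n

Neither loop is natural for the other representation, which is why a
shared class tends to force one copy of the code per type anyway.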

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


RE: [Haskell] string type class

2009-03-06 Thread Peter Verswyvelen
What about AString or AnyString?


 -Original Message-
 From: haskell-boun...@haskell.org [mailto:haskell-boun...@haskell.org]
 On Behalf Of Chris Kuklewicz
 Sent: Friday, March 06, 2009 8:17 PM
 To: Matthew Pocock
 Cc: haskell@haskell.org
 Subject: Re: [Haskell] string type class
 
 Matthew Pocock wrote:
  It seems every time I look at hackage there is yet another stringy
  datatype. For lots of apps, the particular stringy datatype you use
  matters for performance but not algorithmic reasons. Perhaps this is
 a
  good time for someone to propose a stringy class?
 
 Not likely.
 
 I did define my own (private) class for regular expressions, to
 abstract over
 String, the ByteStrings, and Seq Char.  But it is used in one place and
 is a
 wart that should be removed.
 
 The simple task of looping over the contents of a String (once,
 forward) is
 quite different from a Strict ByteString (using an index and a lookup).
 
 This means for decent efficiency I need two copies of my code, hand
 specialized
 to each case.
 
 tail or (x:xs) : very efficient for String (no allocation)
 tail or uncons : not efficient for ByteString (allocation, might as
 well
 convert to [Char]
 
 And indexing by Int is O(n) for String and O(1) for ByteString.
 
 So there are few algorithm that can access both efficiently.
 
 ___
 Haskell mailing list
 Haskell@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Definitions of purity and Lazy IO

2009-03-06 Thread Dan Doel
On Thursday 05 March 2009 11:48:33 pm Jonathan Cast wrote:
  That is, for any program context C[] such that C[f1] is well-typed,
  the program C[f2] must too be well-typed, and if one can observe the
  result of C[f1] and of C[f2], the two observations must be identical.

 Every time?  For every context?  What about the context

   do
  x <- getStdRandom random
  y <- getStdRandom random
  print $ [] x y

 ?  By the standard above, f1 is not even observationally equivalent to
 *itself*.

One could theoretically dismiss this by saying that any particular IO context 
contains some random seed that determines the results of generating the random 
numbers. So, when you run the program and get different results, that's due to 
your actually running it in different contexts (with different seeds).

However, it's possible to get this sort of non-determinism in other ways. For 
instance, consider the following fragment:

  do ctr <- newMVar 0
     let inc = do i <- takeMVar ctr ; putMVar ctr (i+1) ; return i
     v1 <- newEmptyMVar
     v2 <- newEmptyMVar
     forkIO $ do inc >>= putMVar v1
     forkIO $ do inc >>= putMVar v2
     i1 <- takeMVar v1
     i2 <- takeMVar v2
     print $ i1 - i2

If forkIO provides real concurrency, this fragment should be capable of 
displaying either -1 or 1 (of course, everything happens too fast here, so I 
always end up getting -1, but I think the point is sound). So it's another 
context where (-) isn't equivalent to itself.

So, to fix this up in the same way as the random example, you have to add 
something to the context that will supposedly completely determine the 
scheduling order of the threads in the program, so that when threads run in a 
different order, and you get a different result, you can say that you ran the 
program in a different context. But once we start doing that, how much more 
contrived is it to say that each context has some hidden state that determines 
how effects are interleaved with unsafeInterleaveIO such that we see the 
results we do (even though we know that isn't what's going on operationally; 
it isn't what's going on operationally with concurrency, either).

The real issue is that lazy IO sometimes leads people to write buggier 
programs than they otherwise might. The same is true of, say, head and tail, 
but they are not impure by Sabry's definition, nor do they break referential 
transparency. They're 'impure' in that they're partial functions, but I'm not 
sure everyone's ready for Haskell to become a total language yet. :)

Cheers,
-- Dan
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] do you have to use fix with forkio?

2009-03-06 Thread Daryoush Mehrtash
Two questions:

a)   This  chat server implementation doesn't actually close the connection
as a real one would need to do.  If you use forever, is there a way to end
the loop so as to end the connection?

b) In Section 5 of this paper:
http://www.cs.yale.edu/~hl293/download/leak.pdf

Comparing the definition of eSF and e reveals that the primary difference is
 in the fixed-point operators they use. e uses Haskell’s built-in fixed-point
 operator, which is equivalent to the standard:

 fix f = f (fix f)

 eSF, on the other hand, is defined in terms of the loop combinator, which
 ties the loop tighter than the standard fixed-point operator. In particular,
 note in Figure 6 that loop computes the value-level fixed point as z, but
 re-uses itself in the continuation part. This is the key to avoiding the
 space leak.


My reading is that the fix operator, at least in some cases, causes a space
leak, whereas the arrow's loop, which uses the let model, doesn't have
this issue.

Question:  Do I need to worry about a space leak if I am using fix
instead of let?

Thanks

Daryoush
2009/3/5 Luke Palmer lrpal...@gmail.com

 On Thu, Mar 5, 2009 at 6:27 PM, Donn Cave d...@avvanta.com wrote:

 Quoth Jonathan Cast jonathancc...@fastmail.fm:

  You can certainly use let:
 
    reader <- forkIO $ let loop = do
                             (nr', line) <- readChan chan'
                             when (nr /= nr') $ hPutStrLn hdl line
                             loop
                       in loop
 
  But the version with fix is clearer (at least to people who have fix in
  their vocabulary) and arguably better style.

 Would you mind presenting the better style argument?  To me, the
 above could not be clearer, so it seems like the version with fix
 could be only as clear, at best.


 I like using fix when it's simple rather than let, because it tells me the
 purpose of the binding.  eg., when I see

 let foo = ...

 Where ... is fairly long, I'm not sure what the purpose of foo is, or what
 its role is in the final computation.  It may not be used at all, or passed
 to some modifier function, or I don't know what.  Whereas with:

    fix $ \foo -> ...

 I know that whatever ... is, it is what is returned, and the purpose of foo
 is to use that return value in the expression itself.

 I know that it's a simple matter of scanning to the corresponding in, but
 let can be used for a lot of things, whereas fix $ \foo is basically only
 for simple knot-tying.  Now, that doesn't say anything about the use of fix
 without an argument (passed to an HOF) or with a tuple as an argument or
 many other cases, which my brain has not chunked nearly as effectively.  I
 think fix is best with a single, named argument.

 Luke

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] do you have to use fix with forkio?

2009-03-06 Thread Lennart Augustsson
Personally I would not use fix.  I don't think it improves readability.

  -- Lennart

2009/3/5 Daryoush Mehrtash dmehrt...@gmail.com:
 In this chat server implementation
 http://www.haskell.org/haskellwiki/Implement_a_chat_server

 forkIO is used with fix as in:

 reader <- forkIO $ fix $ \loop -> do
     (nr', line) <- readChan chan'
     when (nr /= nr') $ hPutStrLn hdl line
     loop

 Do you have to use fix?  Or is there a way to write this with a let?

 --
 Daryoush



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] Haskell Logo Voting will start soon!

2009-03-06 Thread Bayley, Alistair
 The voting system we'll use is the Condorcet Internet Voting 
 System (http://www.cs.cornell.edu/andru/civs.html 
 ).  The poll won't be public, but every subscriber to Haskell-Cafe  
 will get a (private) voting ballot by email.

 I'll supervise the poll and make sure it's started, stopped and all  
 Haskell-Cafe subscribers get a ballot (Simon Marlow provided 
 the email addresses).

 I'd love to hear about anything that I missed and/or that might  
 influence the voting process in a significant way.  (There are  
 probably some people subscribed with multiple addresses, but I'll be  
 using the subscriber list from yesterday, so signing up now 
 with lots of addresses won't get you more ballots ;)

I was wondering if you were addressing duplication. I've been subscribed to 
café for a long time with at least two email addresses (work and home) but I 
don't think I should be getting two votes.

Alistair
*
Confidentiality Note: The information contained in this message,
and any attachments, may contain confidential and/or privileged
material. It is intended solely for the person(s) or entity to
which it is addressed. Any review, retransmission, dissemination,
or taking of any action in reliance upon this information by
persons or entities other than the intended recipient(s) is
prohibited. If you received this in error, please contact the
sender and delete the material from any computer.
*

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Interesting problem from Bird (4.2.13)

2009-03-06 Thread Gleb Alexeyev

Gleb Alexeyev wrote:

instance Eq a => Eq (CatList a) where
  a == b = case (viewCL a, viewCL b) of
             (Just (x, xs), Just (y, ys)) -> x==y && xs == ys
             (Nothing, Nothing)           -> True
             _                            -> False
I just realized that my solution is needlessly verbose, the following 
instance suffices:

 instance Eq a => Eq (CatList a) where
   a == b = viewCL a == viewCL b

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell Logo Voting will start soon!

2009-03-06 Thread Magnus Therning
On Fri, Mar 6, 2009 at 9:08 AM, Bayley, Alistair
alistair.bay...@invesco.com wrote:
 The voting system we'll use is the Condorcet Internet Voting
 System (http://www.cs.cornell.edu/andru/civs.html
 ).  The poll won't be public, but every subscriber to Haskell-Cafe
 will get a (private) voting ballot by email.

 I'll supervise the poll and make sure it's started, stopped and all
 Haskell-Cafe subscribers get a ballot (Simon Marlow provided
 the email addresses).

 I'd love to hear about anything that I missed and/or that might
 influence the voting process in a significant way.  (There are
 probably some people subscribed with multiple addresses, but I'll be
 using the subscriber list from yesterday, so signing up now
 with lots of addresses won't get you more ballots ;)

 I was wondering if you were addressing duplication. I've been subscribed to 
 café for a long time with at least two email addresses (work and home) but I 
 don't think I should be getting two votes.

Given the size of the group, and the extremely high standard of the
members when it comes to moral fibre, common sense, intelligence, etc,
etc, do we really need to enforce prevention of duplication?

:-)

/M

-- 
Magnus Therning(OpenPGP: 0xAB4DFBA4)
magnus@therning.org  Jabber: magnus@therning.org
http://therning.org/magnus identi.ca|twitter: magthe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] Haskell Logo Voting will start soon!

2009-03-06 Thread Bayley, Alistair
  I'd love to hear about anything that I missed and/or that might
  influence the voting process in a significant way.  (There are
  probably some people subscribed with multiple addresses, 
 but I'll be
  using the subscriber list from yesterday, so signing up now
  with lots of addresses won't get you more ballots ;)
 
  I was wondering if you were addressing duplication. I've 
 been subscribed to café for a long time with at least two 
 email addresses (work and home) but I don't think I should be 
 getting two votes.
 
 Given the size of the group, and the extremely high standard of the
 members when it comes to moral fibre, common sense, intelligense, etc,
 etc, do we really need to enforce prevention of duplication?

Well, that is one possible solution (let it ride). I don't know what the level 
of email address duplication is, so I don't know the risk.

Voting systems generally have to make a lot of effort to be fair, otherwise 
detractors can reasonably claim that the result is not valid. And as this is 
somewhat of a bike-shed level decision, you should expect a great deal of 
interest and passion!

Alistair
*
Confidentiality Note: The information contained in this message,
and any attachments, may contain confidential and/or privileged
material. It is intended solely for the person(s) or entity to
which it is addressed. Any review, retransmission, dissemination,
or taking of any action in reliance upon this information by
persons or entities other than the intended recipient(s) is
prohibited. If you received this in error, please contact the
sender and delete the material from any computer.
*

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] Lazy IO breaks purity

2009-03-06 Thread Duncan Coutts
On Thu, 2009-03-05 at 13:08 +, Simon Marlow wrote:
 Lennart Augustsson wrote:
  I don't see any breaking of referential transparence in your code.
  Every time you do an IO operation the result is basically
  non-deterministic since you are talking to the outside world.
  You're assuming the IO has some kind of semantics that Haskell makes
  no promises about.
  
  I'm not saying that this isn't a problem, because it is.
  But it doesn't break referential transparency, it just makes the
  semantics of IO even more complicated.
  
  (I don't have a formal proof that unsafeInterleaveIO cannot break RT,
  but I've not seen an example where it does yet.)
 
 So the argument is something like: we can think of the result of a call to 
 unsafeInterleaveIO as having been chosen at the time we called 
 unsafeInterleaveIO, rather than when its result is actually evaluated.

But surely that is the mistake. With IO, bound values depend solely on
the effects that happened previously (no time travel). With ordinary
sequential IO, that's easy to understand and explain because the binding
and the effects happen at the same time/place.

The extension to unsafeInterleaveIO is not to pretend that the value is
chosen at the point it is bound. The value depends on the effects not
the binding. The extension is that we separate the binding point and the
effects. The effects can be arbitrarily interleaved with other effects.
The value we get does potentially depend on all those prior effects.

The arbitrary interleaving of effects is obviously weaker than them
happening now, in sequence, before the next action. This works well when
the deferred side effects do not interfere with other side effects and is
non-deterministic when they do interfere, as in Oleg's example. It
doesn't break referential transparency though, thanks to memoisation we
only observe one final value.

The standard libs, being sensible, mostly only use unsafeInterleaveIO in
the cases where we expect little interference. Note how Oleg has to go
out of his way to construct the example, circumventing the semi-closed
Handle state which was designed to stop people from shooting themselves
in the foot.

There used to be a real example where unsafeInterleaveIO did break
referential transparency. That was due to concurrency and lack of
locking.
http://hackage.haskell.org/trac/ghc/ticket/986
When the bug was around there was a *single* pure value that if used by
two different threads could be observed to be different. There's nothing
similar happening here.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] FRP + physics / status of hpysics

2009-03-06 Thread Peter Verswyvelen
Regarding hpysics, did anybody do some experiments with this? The blog
seems to be inactive since December 2008; has development ceased?
Do alternatives exist? Maybe good wrappers (hopefully pure...)  around
existing engines?

Integrating hpysics with Grapefruit might be a good topic for the Hackathon,
trying to make a simple game (e.g. Pong or Breakout) without using recursive
signal functions, but with correct collision response and better-than-Euler
integration, all handled by the physics engine. Other FRP engines could be
tried, but Grapefruit hacking is already a topic on the Hackathon, so it
would combine efforts.

It feels as if none of the current FRP engines can handle physics correctly,
since a typical physics implementation requires time backtracking, in the
sense that when you want to advance the current simulation time by a
timestep, collision events can happen during that time interval, and hence
the FRP engine can only advance time until the earliest collision event. So
to do physics *with* an FRP engine, the implementation and maybe even
semantics of the FRP system might need to be changed. *Using* a physics
engine as a blackbox inside an FRP system might make more sense.

Thanks to Wolfgang Jeltsch and Christopher Lane Hinson for having a
discussion with me that led to this.  Interestingly a similar discussion
was held by other people on the Reactive mailing list at the same time :-)

Cheers,
Peter Verswyvelen
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell Logo Voting will start soon!

2009-03-06 Thread Eelco Lempsink

On 6 mrt 2009, at 00:03, Achim Schneider wrote:

Eelco Lempsink ee...@lempsink.nl wrote:


The poll won't be public, but every subscriber to Haskell-Cafe
will get a (private) voting ballot by email.


What about us gmane users?



Good point!  All of you can send me your email address (directly,  
please don't use the group/list) if you want to participate.  I'll  
merge that with the subscribers list.


Please include haskell logo voting ballot request in the subject or  
body and write a small but unique message so I can (more or less)  
check you're not just trying to get extra votes and to make it more  
likely it passes my spam filter (I'll check my spam folder, but with the  
thousands of messages a day pouring into it I might miss your message).


As Magnus Therning remarked, I also trust the size of the group, and  
the extremely high standard of the members when it comes to moral  
fibre, common sense, intelligence, etc. to prevent duplication or at  
least a significant impact on the result ;)


Consider yourself added to the ballot list, Achim, and thanks for  
bringing this point up.


--
Regards,

Eelco Lempsink



PGP.sig
Description: This is a digitally signed message part
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [reactive] FRP + physics / status of hpysics

2009-03-06 Thread jean-christophe mincke
Hello Peter,

The backtracking in time to solve the collision problem you mentioned is
not, in my opinion, efficient.

From a previous life as an aerospace engineer, I remember that two other
solutions exist to handle contact or collision constraints, at least if 2nd
order diff. equations are used to describe the motion of a solid with mass.

In any case, you have to use a 'serious' variable time step integration
algorithm (i.e. Runge-Kutta).

1. The naive one: introduce a (virtual) spring between every 2 objects that
may collide.  When these objects get closer, the spring is compressed and
tries to push them back (a rough sketch follows after point 2).
If the mass/velocity are high, that leads to a stiff system and the time
steps may become very small.
However, this solution does not require any modification of the equations of
motion.

2. The serious one: modify or augment the equations of motion so that the
collision constraints are implicitly taken into account. If I remember well,
the magical trick is to use Lagrangian multipliers.
The difficulty here (especially in the context of aFRP) is to derive the new
equations.
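
To make point 1 concrete, here is a rough Haskell sketch of such a penalty
force for two circles; the stiffness k and the radii are made-up
illustrative numbers, not values from any real engine.

-- penaltyForce k r1 r2 dist: spring force between two circles of radii
-- r1 and r2 whose centres are dist apart; k is the (assumed) stiffness.
-- A large k keeps the penetration small but makes the ODE stiff.
penaltyForce :: Double -> Double -> Double -> Double -> Double
penaltyForce k r1 r2 dist
  | overlap > 0 = k * overlap   -- bodies interpenetrate: push them apart
  | otherwise   = 0             -- not in contact: no force
  where
    overlap = (r1 + r2) - dist  -- penetration depth

main :: IO ()
main = mapM_ (print . penaltyForce 1e4 0.5 0.5) [1.2, 1.0, 0.9]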

Hope it helps

Regards

Jean-Christophe Mincke


2009/3/6 Peter Verswyvelen bugf...@gmail.com

 Regarding hpysics, did anybody did some experiments with this? The blog
 seems to be inactive since december 2008; has development ceased?
 Do alternatives exist? Maybe good wrappers (hopefully pure...)  around
 existing engines?

 Integrating hpysics with Grapefruit might be a good topic for the Hackaton,
 trying to make a simple game (e.g. Pong or Breakout) without using recursive
 signal functions, but with correct collision response and better-than-Euler
 integration, all handled by the physics engine. Other FRP engines could be
 tried, but Grapefruit hacking is already a topic on the Hackaton, so it
 would combine efforts.

 It feels as if none of the current FRP engines can handle physics
 correctly, since a typical physics implementations requires time
 backtracking, in the sense that when you want to advance the current
 simulation time by a timestep, collision events can happen during that time
 interval, and hence the FRP engine can only advance time until the earliest
 collision event. So to do physics *with* an FRP engine, the implementation
 and maybe even semantics of the FRP system might need to be changed. *Using*
 a physics engine as a blackbox inside an FRP system might make more sense.

 Thanks to Wolfgang Jeltsch and Christopher Lane Hinson for having a
 discussion with me that lead to this.  Interestingly a similar discussion
 was help by other people in the Reactive mailing list at the same time :-)

 Cheers,
 Peter Verswyvelen






 ___
 Reactive mailing list
 react...@haskell.org
 http://www.haskell.org/mailman/listinfo/reactive


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] do you have to use fix with forkio?

2009-03-06 Thread Andrea Vezzosi
2009/3/6 Daryoush Mehrtash dmehrt...@gmail.com:
 Two questions:

 a)   This  chat server implementation doesn't actually close the connection
 as a real one would need to do.  If you use forever  is there a way to end
 the loop so as to end the connection?
Yes, throw an exception and catch it from outside the forever.
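Here is a minimal sketch of that, using only base: the worker runs forever,
and killThread (or any other asynchronous exception) ends the loop; the try
around the forever is where a real server would close its connection handle
(only hinted at in a comment here).

import Control.Concurrent (forkIO, killThread, threadDelay)
import Control.Exception (SomeException, try)
import Control.Monad (forever)

main :: IO ()
main = do
  tid <- forkIO $ do
    -- in the chat server this would be the per-client loop
    r <- try (forever (putStrLn "serving..." >> threadDelay 500000))
    case (r :: Either SomeException ()) of
      Left e  -> putStrLn ("loop ended: " ++ show e)  -- close the handle here
      Right _ -> return ()
  threadDelay 1200000
  killThread tid      -- throws ThreadKilled into the worker, ending the loop
  threadDelay 200000  -- give the worker a moment to print its message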

 b) In Section 5 of this paper:
 http://www.cs.yale.edu/~hl293/download/leak.pdf

 Comparing the definition of eSF and e reveals that the primary difference
 is in
 the fixed-point operators they use. e uses Haskell’s built-in fixed-point
 operator,
 which is equivalent to the standard:

 fix f = f (fix f)
 eSF, on the other hand, is defined in terms of the loop combinator, which
 ties the loop
 tighter than the standard fixed-point operator. In particular, note in
 Figure 6 that
 loop computes the value-level fixed point as z, but re-uses itself in the
 continuation
 part. This is the key to avoiding the space leak.

 My reading is that the fix operator, at least in some cases, causes space
 leak.   Where as the arrow's loop, which uses let model,   doesn't have
 this issue.

 Question:  Do I need to worry about space leak if I am using the fix to
 instead of the let?
the definition of fix in Data.Function[1] actually uses let:
fix :: (a -> a) -> a
fix f = let x = f x in x

[1] http://darcs.haskell.org/packages/base/Data/Function.hs

 Thanks

 Daryoush
 2009/3/5 Luke Palmer lrpal...@gmail.com

 On Thu, Mar 5, 2009 at 6:27 PM, Donn Cave d...@avvanta.com wrote:

 Quoth Jonathan Cast jonathancc...@fastmail.fm:

  You can certainly use let:
 
    reader <- forkIO $ let loop = do
        (nr', line) <- readChan chan'
        when (nr /= nr') $ hPutStrLn hdl line
        loop
      in loop
 
  But the version with fix is clearer (at least to people who have fix in
  their vocabulary) and arguably better style.

 Would you mind presenting the better style argument?  To me, the
 above could not be clearer, so it seems like the version with fix
 could be only as clear, at best.

 I like using fix when it's simple rather than let, because it tells me the
 purpose of the binding.  eg., when I see
     let foo = ...
 Where ... is fairly long, I'm not sure what the purpose of foo is, or what
 its role is in the final computation.  It may not be used at all, or passed
 to some modifier function, or I don't know what.  Whereas with:
    fix $ \foo -> ...
 I know that whatever ... is, it is what is returned, and the purpose of foo
 is to use that return value in the expression itself.
 I know that it's a simple matter of scanning to the corresponding in,
 but let can be used for a lot of things, whereas fix $ \foo is basically
 only for simple knot-tying.  Now, that doesn't say anything about the use of
 fix without an argument (passed to an HOF) or with a tuple as an argument or
 many other cases, which my brain has not chunked nearly as effectively.  I
 think fix is best with a single, named argument.
 Luke
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hint and Ambiguous Type issue

2009-03-06 Thread Daniel Gorín
I think you can achieve what you want but you need to use the correct  
types for it. Remember that when you write:


getFilterMainStuff :: Deliverable a => FilePath -> Interpreter (Path, Filter a)


the proper way to read the signature is the caller of  
getFilterMainStuff is entitled to pick the type of a, as long as it  
picks an instance of Deliverable. Contrast this with a method  
declaration in Java where:


public Set getKeys()

is to be read: The invoked object may pick the type of the result, as  
long as it is a subclass of (or implements) Set.


When you say that you want to apply fMain to a (Config, Email) and
get back a Deliverable a, I think you mean that fMain picks the type
for a (and it has to be an instance of Deliverable). There are two ways to do
this in Haskell:


1) You don't. If you know that your possible Deliverables are just  
FlatEmail and MaildirEmail, then the idiomatic way of doing this would  
be to turn Deliverable into an ADT:


data Deliverable = FlatEmail  | MaildirEmail  deriving  
(Typeable)

getFilterMainStuff :: FilePath -> Interpreter (Path, Filter Deliverable)

2) Existential types. If, for some reason, you need your dynamic  
code to be able to define new deliverables, then you need to use  
the extension called existential types.


-- using GADT syntax
data SomeDeliverable where Wrap :: Deliverable a => a -> SomeDeliverable

getFilterMainStuff :: FilePath -> Interpreter (Path, Filter SomeDeliverable)


This basically resembles the contract of the Java world: if you run
fMain you will get a value of type SomeDeliverable; you can pattern-match
it and will get something whose actual type you don't know, but
which is an instance of class Deliverable.


See http://www.haskell.org/haskellwiki/Existential_type
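
If it helps, here is a minimal, self-contained sketch of option 2; the
Deliverable class and the two email types below are toy stand-ins for the
real ones, so treat the names as illustrative only.

{-# LANGUAGE GADTs #-}

class Deliverable a where
  deliverIO :: a -> IO ()

-- toy stand-ins for the real deliverable types
newtype FlatEmail    = FlatEmail    String
newtype MaildirEmail = MaildirEmail String

instance Deliverable FlatEmail where
  deliverIO (FlatEmail s)    = putStrLn ("flat: " ++ s)

instance Deliverable MaildirEmail where
  deliverIO (MaildirEmail s) = putStrLn ("maildir: " ++ s)

-- the existential wrapper from option 2
data SomeDeliverable where
  Wrap :: Deliverable a => a -> SomeDeliverable

-- callers can deliver without knowing the concrete type inside the wrapper
deliverSome :: SomeDeliverable -> IO ()
deliverSome (Wrap d) = deliverIO d

main :: IO ()
main = mapM_ deliverSome [Wrap (FlatEmail "hi"), Wrap (MaildirEmail "bye")]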

Good luck!

Daniel

On Mar 6, 2009, at 2:33 AM, Joseph Fredette wrote:

Okay, I think I understand... I got so hung up thinking the error  
had to be in the Interpreter code, I didn't bother to look in the  
caller...


But every answer breeds another question... The practical reason for  
inferring fMain as being of type Deliverable a => Filter a is to
apply it (via runReader) to a (Config, Email) and get back a  
Deliverable a, then to use the deliverIO method on the result -- my  
question is, it appears I have to know the specific type of a in  
order to get the thing to typecheck, but in order to use it, I need  
to not know it...


Perhaps, in fact, I'm doing this wrong. Thanks for the help Daniel,  
everyone...


/Joe

Daniel Gorín wrote:

Ok, so I've pulled the latest version and the error I get now is:

Hackmain.hs:70:43:
   Ambiguous type variable `a' in the constraint:
 `Deliverable a'
   arising from a use of `getFilterMainStuff' at Hackmain.hs: 
70:43-60
   Probable fix: add a type signature that fixes these type  
variable(s)


Function getFilterMainStuff compiles just fine . The offending line  
is in buildConf and reads:


 (inboxL, fMain) <- runUnsafeInterpreter . getFilterMainStuff $ filterMainL


The problem is that GHC can't figure out the type of fMain. It  
infers (Filter a), but doesn't know what is a and therefore how to  
build a proper dictionary to pass to getFilterMainStuff.


Observe that you would get a similar error message if you just  
defined:


 f = show . read

I can get it to compile by providing a type annotation for fMain:

 (inboxL, fMain) <- runUnsafeInterpreter . getFilterMainStuff $ filterMainL

 let _ = fMain :: Filter MaildirEmail

So once you use fMain somewhere and GHC can infer its type,
everything should work fine.


Daniel

On Mar 5, 2009, at 11:26 PM, Joseph Fredette wrote:

Oh, crap- I must have never pushed the latest patches, I did put  
the typeable instances in all the appropriate places. And provided  
a (maybe incorrect? Though I'm fairly sure that shouldn't affect  
the bug I'm having now) Typeable implementation for Reader, but I  
still get this ambiguous type. I'll push the current version asap.


Thanks.

/Joe

Daniel Gorín wrote:

Hi

I've downloaded Hackmain from patch-tag, but I'm getting a  
different error. The error I get is:


Hackmain.hs:63:10:
  No instance for (Data.Typeable.Typeable2
 Control.Monad.Reader.Reader)
arising from a use of `interpret' at Hackmain.hs:63:10-67

Hint requires the interpreted values to be an instance of  
Typeable in order to check, in runtime, that the interpreted  
value matches the type declared at compile. Therefore, you need  
to make  sure that (Filter a) is indeed an instance of Typeable.


Since you have Filter a = Reader (Config, Email) a, you probably  
need to


- Derive Config and Email instances for Filter,

- Manually provide Typeable instances for Reader a b, something  
along the lines of:


instance (Typeable a, Typeable b) => Typeable (Reader a b) where...

(I don't know why this isn't done in the mtl)

- Change the signature to:

getFilterMain :: (Typeable a, Deliverable a) = FilePath -  

[Haskell-cafe] Re: [reactive] FRP + physics / status of hpysics

2009-03-06 Thread Peter Verswyvelen
Thanks for the info. With backtracking I actually meant the computation of
the exact collision time, letting (part of) the simulation only go that far,
so it's not really backtracking in the physics engine; does that
correspond to your 2nd proposal? I just got this from a physics book
http://www.amazon.com/Dynamic-Simulations-Multibody-Systems-Coutinho/dp/038795192X
that implements it that way (at least that's what I got from reading it
diagonally; the book contains a lot of advanced math...)
But do you mean that with your proposed methods the simulation will advance
a full time step anyway, so the time interval does not need to be broken up
into smaller ones, where each sub-interval ends with a collision event? I
wonder how this could work, since most of the time in a game when a collision
happens, the game logic decides what forces to apply next, so the simulation
can't really advance a full time step anyway (although that could be hacked,
I guess). Converting the game logic into differential equations with
constraints seems very hard.

However, I must admit I haven't used any modern physics engines in the last 5
years or so... But it's interesting to hear from people that did.


On Fri, Mar 6, 2009 at 11:59 AM, jean-christophe mincke 
jeanchristophe.min...@gmail.com wrote:

 Hello Peter,

 The backtraking in time to solve the collision problem you mentionned is
 not, in my opinion, efficient.

 From a previous life as an aerospace engineer, I remember that two other
 solutions exist to handle contact or collision constraints, at least if 2nd
 order diff. equations are used to describe the motion of a solid with mass.

 In any case, you have to use a 'serious' variable time step integration
 algorithm (I.E Runge-Kutta).

 1. The naive one: introduce a (virtual) spring between every 2 objets that
 may collide.  When these objets get closer, the spring is compressed and
 tries to push them back.
 If the mass/velocity are high, that leads to a stiff system and the time
 steps may become very small.
 However, this solution does not require any modification of the equations
 of motion.

 2. The serious one: modify or augment the equations of motion so that the
 collision constraints are implicitly taken into account. If I remember well,
 the magical trick is to use langrangian multipliers.
 The difficult here (especially in the context of aFRP) is to derive the new
 equations.

 Hope it helps

 Regards

 Jean-Christophe Mincke


 2009/3/6 Peter Verswyvelen bugf...@gmail.com

 Regarding hpysics, did anybody did some experiments with this? The blog
 seems to be inactive since december 2008; has development ceased?

 Do alternatives exist? Maybe good wrappers (hopefully pure...)  around
 existing engines?

 Integrating hpysics with Grapefruit might be a good topic for the
 Hackaton, trying to make a simple game (e.g. Pong or Breakout) without using
 recursive signal functions, but with correct collision response and
 better-than-Euler integration, all handled by the physics engine. Other FRP
 engines could be tried, but Grapefruit hacking is already a topic on the
 Hackaton, so it would combine efforts.

 It feels as if none of the current FRP engines can handle physics
 correctly, since a typical physics implementations requires time
 backtracking, in the sense that when you want to advance the current
 simulation time by a timestep, collision events can happen during that time
 interval, and hence the FRP engine can only advance time until the earliest
 collision event. So to do physics *with* an FRP engine, the implementation
 and maybe even semantics of the FRP system might need to be changed. *Using*
 a physics engine as a blackbox inside an FRP system might make more sense.

 Thanks to Wolfgang Jeltsch and Christopher Lane Hinson for having a
 discussion with me that lead to this.  Interestingly a similar discussion
 was help by other people in the Reactive mailing list at the same time :-)

 Cheers,
 Peter Verswyvelen






 ___
 Reactive mailing list
 react...@haskell.org
 http://www.haskell.org/mailman/listinfo/reactive



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: FRP + physics / status of hpysics

2009-03-06 Thread Achim Schneider
Peter Verswyvelen bugf...@gmail.com
wrote:

 Integrating hpysics with Grapefruit might be a good topic for the
 Hackaton, trying to make a simple game (e.g. Pong or Breakout)

Be sure to have more than two simultaneously moving collision objects
besides paddles in the specs, or it won't get close enough to a real
physics simulation... well, at least I know how to get an A for a
breakout by not allowing the ball to travel below the top of the paddle
to hide a bug. Devastatingly simple vector equations get quite complex
if you happen to stare at their assembly incarnations.

-- 
(c) this sig last receiving data processing entity. Inspect headers
for copyright history. All rights reserved. Copying, hiring, renting,
performance and/or quoting of this signature prohibited.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [reactive] FRP + physics / status of hpysics

2009-03-06 Thread jean-christophe mincke
Peter,

Backtracking: yes it is the computation of the exact collision time.

I gave 2 solutions that can be used in multi-body dynamics, in general (that
is, with 2nd order derivatives). I am not a game writing  specialist but, if
I understand you, I would say that, in a game,  we have 1st order diff
equations of the form

x(T) = integrate(speed(x(t), t), T0, T) or on a diff form : dx/dt = f( x(t),
t)

where speed depends, without any lost of generality, on t and x(t).

In case of a collision, that is when  x(t) = Collision_position (i.e a ball
boucing against a fixed wall), the speed may change discontinuously (i.e.
speed = - speed).  It will happen at an unknown time Tc. It is possible to
find Tc, accurately, by solving the equation (i.e Newton Raphson or another
root finding method).

x(Tc) = Collision_position
where
x(Tc) = integrate(speed'(x(t), t), T0, Tc) where speed' is the speed as if
there were no obstacle.

So I would say that the main algorithm to solve such a problem is

1.Suppose  that S(t) is the description (equations) of your system with t =
some initial  t0. This system will behave continuously (that is, all its
state variables will be continuous) between t0 and t1. In your example t1 is
the exact moment of the collision.

2. With a combination of a integration/root finding algorithm, find t1 - you
get all the state variables for free because to find t1, we need to
integrate the system.

3 Change S(t) to S'(t) where t = t1. S'(t) is a description of the system
that takes into account the effect of the collision.

Continue with first step.

Rem: If a accurate root finding algorithm is too costly, another solution
is, knowing S(t) at some tn, compute S(t) at tn+1 without paying attention
to any collision. Than using S(tn) to check whether a collision took place.

If no collision: continue and compute S(tn+2) etc.
If collision: Assume that the system is linear between tn and tn+1 and then
solve the linear equation:










On Fri, Mar 6, 2009 at 1:01 PM, Peter Verswyvelen bugf...@gmail.com wrote:

 Thanks for the info. With backtracking I actually meant the computation of
 the exact collision time, and let (part of the simulation) only go that far,
 so it's not really back tracking in the physics engine; does that
 correspond to your 2nd proposal. I just got this from a physics 
 bookhttp://www.amazon.com/Dynamic-Simulations-Multibody-Systems-Coutinho/dp/038795192Xthat
  implements it that way (at least that why I got from reading it
 diagonally, the books contains a lot of advanced math...)
 But do you mean that with your proposed methods the simulation will advance
 a full time step anyway, so the time interval does not need to broken up
 into smaller ones, where each sub-interval ends with a collision event? I
 wander how this could work since most of the time in a game when a collision
 happens, the game logic decides what forces to apply next, so the simulation
 can't really advance a full time step anyway (although that could be hacked
 I guess). Converting the game logic into differential equations with
 constraints seems very hard.

 However, I must admit I haven't used any modern physics engines the last 5
 years or so... But it's interesting to hear from people that did.


 On Fri, Mar 6, 2009 at 11:59 AM, jean-christophe mincke 
 jeanchristophe.min...@gmail.com wrote:

 Hello Peter,

 The backtraking in time to solve the collision problem you mentionned is
 not, in my opinion, efficient.

 From a previous life as an aerospace engineer, I remember that two other
 solutions exist to handle contact or collision constraints, at least if 2nd
 order diff. equations are used to describe the motion of a solid with mass.

 In any case, you have to use a 'serious' variable time step integration
 algorithm (I.E Runge-Kutta).

 1. The naive one: introduce a (virtual) spring between every 2 objets that
 may collide.  When these objets get closer, the spring is compressed and
 tries to push them back.
 If the mass/velocity are high, that leads to a stiff system and the time
 steps may become very small.
 However, this solution does not require any modification of the equations
 of motion.

 2. The serious one: modify or augment the equations of motion so that the
 collision constraints are implicitly taken into account. If I remember well,
 the magical trick is to use langrangian multipliers.
 The difficult here (especially in the context of aFRP) is to derive the
 new equations.

 Hope it helps

 Regards

 Jean-Christophe Mincke


 2009/3/6 Peter Verswyvelen bugf...@gmail.com

 Regarding hpysics, did anybody did some experiments with this? The blog
 seems to be inactive since december 2008; has development ceased?

 Do alternatives exist? Maybe good wrappers (hopefully pure...)  around
 existing engines?

 Integrating hpysics with Grapefruit might be a good topic for the
 Hackaton, trying to make a simple game (e.g. Pong or Breakout) without using
 recursive signal functions, but with 

[Haskell-cafe] Re: [reactive] FRP + physics / status of hpysics

2009-03-06 Thread jean-christophe mincke
Sorry, my message was inadvertently sent - hit the wrong key - a gmail
feature

Peter,

Backtracking: yes it is the computation of the exact collision time.

I gave 2 solutions that can be used in multi-body dynamics, in general (that
is, with 2nd order derivatives). I am not a game writing  specialist but, if
I understand you, I would say that, in a game,  we have 1st order diff
equations of the form

x(T) = integrate(speed(x(t), t), T0, T), or in differential form: dx/dt = f(x(t), t)

where speed depends, without any loss of generality, on t and x(t).

In case of a collision, that is when x(t) = Collision_position (e.g. a ball
bouncing against a fixed wall), the speed may change discontinuously (i.e.
speed = -speed).  It will happen at an unknown time Tc. It is possible to
find Tc accurately by solving the equation (e.g. with Newton-Raphson or
another root-finding method).

x(Tc) = Collision_position
where
x(Tc) = integrate(speed'(x(t), t), T0, Tc) where speed' is the speed as if
there were no obstacle.

So I would say that the main algorithm to solve such a problem is:

1. Suppose that S(t) is the description (equations) of your system for t >=
some initial t0. This system will behave continuously (that is, all its
state variables will be continuous) between t0 and t1. In your example t1 is
the exact moment of the collision.

2. With a combination of an integration/root-finding algorithm, find t1 - you
get all the state variables for free because to find t1, we need to
integrate the system.

3. Change S(t) to S'(t) for t >= t1. S'(t) is a description of the system
that takes into account the effect of the collision.

4. Continue with the first step.

Rem: If an accurate root-finding algorithm is too costly, another solution
is: knowing S(t) at some tn, compute S(t) at tn+1 without paying attention
to any collision. Then use S(tn+1) to check whether a collision took place.

If no collision: continue and compute S(tn+2) etc.
If collision: Assume that the system is linear between tn and tn+1 and then
solve the linear equation:

  x(tc) = x(tn) + (x(tn+1) - x(tn))/(tn+1 - tn) * (tc - tn) = Collision_position

Once tc is known, replace your system as explained above with S'(t), t >= tc.

The condition here is that [tn, tn+1] must be chosen small enough so that
the assumption of linearity holds.

Regards

Jean-Christophe Mincke


On Fri, Mar 6, 2009 at 2:26 PM, jean-christophe mincke 
jeanchristophe.min...@gmail.com wrote:

 Peter,

 Backtracking: yes it is the computation of the exact collision time.

 I gave 2 solutions that can be used in multi-body dynamics, in general
 (that is, with 2nd order derivatives). I am not a game writing  specialist
 but, if I understand you, I would say that, in a game,  we have 1st order
 diff equations of the form

 x(T) = integrate(speed(x(t), t), T0, T) or on a diff form : dx/dt = f(
 x(t), t)

 where speed depends, without any lost of generality, on t and x(t).

 In case of a collision, that is when  x(t) = Collision_position (i.e a ball
 boucing against a fixed wall), the speed may change discontinuously (i.e.
 speed = - speed).  It will happen at an unknown time Tc. It is possible to
 find Tc, accurately, by solving the equation (i.e Newton Raphson or another
 root finding method).

 x(Tc) = Collision_position
 where
 x(Tc) = integrate(speed'(x(t), t), T0, Tc) where speed' is the speed as if
 there were no obstacle.

 So I would say that the main algorithm to solve such a problem is

 1.Suppose  that S(t) is the description (equations) of your system with t
 = some initial  t0. This system will behave continuously (that is, all its
 state variables will be continuous) between t0 and t1. In your example t1 is
 the exact moment of the collision.

 2. With a combination of a integration/root finding algorithm, find t1 -
 you get all the state variables for free because to find t1, we need to
 integrate the system.

 3 Change S(t) to S'(t) where t = t1. S'(t) is a description of the system
 that takes into account the effect of the collision.

 Continue with first step.

 Rem: If a accurate root finding algorithm is too costly, another solution
 is, knowing S(t) at some tn, compute S(t) at tn+1 without paying attention
 to any collision. Than using S(tn) to check whether a collision took place.

 If no collision: continue and compute S(tn+2) etc.
 If collision: Assume that the system is linear between tn and tn+1 and then
 solve the linear equation:











 On Fri, Mar 6, 2009 at 1:01 PM, Peter Verswyvelen bugf...@gmail.comwrote:

 Thanks for the info. With backtracking I actually meant the computation of
 the exact collision time, and let (part of the simulation) only go that far,
 so it's not really back tracking in the physics engine; does that
 correspond to your 2nd proposal. I just got this from a physics 
 bookhttp://www.amazon.com/Dynamic-Simulations-Multibody-Systems-Coutinho/dp/038795192Xthat
  implements it that way (at least that why I got from reading it
 diagonally, 

[Haskell-cafe] Re: Migration to QuickCheck 2.0

2009-03-06 Thread John Goerzen
John Goerzen wrote:
 Hi,
 
 My google skills must be faulty, because I can't find much stuff on
 migrating from QuickCheck 1.0 to 2.0.
 
 I've got a number of questions:
 
 What's the deal with Result and reason being in two different places
 in QuickCheck with two different definitions?
 
 All the QC.Config -- configMaxTest, defaultConfig, arguments, etc. are
 missing.  Are there direct replacements?

OK, ignore the second one.  That was an HUnit question; sorry.

 
 Thanks,
 
 -- John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [reactive] FRP + physics / status of hpysics

2009-03-06 Thread Wolfgang Jeltsch
On Friday, 6 March 2009 at 14:34, Daniel Bünzli wrote:
  without using recursive signal functions,

 If this is because there's this limitation in the frp system you use

It is.

 then better fix the system.

The system is Grapefruit, by the way. And I’m its developer, by the way. :-)
I have to say that it’s hard, if not impossible, to combine recursive signal
definitions with other features I want to have.

The point of recursive definitions is that you want to turn recursive 
equations from your problem domain directly into Haskell definitions. This is 
nice from a user’s point of view but hard from a programmer’s point of view. 
Standard Haskell already shows that. While it is possible to define an 
infinite list of ones by

ones = 1 : ones,

it is not possible to do the same for sequences of type Seq. The above 
definition of ones relies very much on the structure of lists. Analogously, 
the ability to define signals recursively restricts the set of possible 
signal implementations seriously.
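
A small illustration of that point, taking Seq to be Data.Sequence from the
containers package:

import Data.Sequence (Seq, (<|))

-- Lazy cons: the recursive definition is productive, so a prefix can be
-- observed.
onesList :: [Int]
onesList = 1 : onesList

-- (<|) must inspect its second argument before it can build anything, so
-- this recursive definition never reaches weak head normal form: it is
-- bottom.
onesSeq :: Seq Int
onesSeq = 1 <| onesSeq

main :: IO ()
main = do
  print (take 3 onesList)   -- prints [1,1,1]
  -- forcing onesSeq here (e.g. with Data.Sequence.take 3) would loop forever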

Haskell’s recursive definitions are very general. They have to find fixpoints 
of arbitrary functions over arbitrary types. Therefore, their semantics are 
rather weak. They give you the least defined fixpoint. The consequence is 
that you get bottom if you use something like

x = 0 * x

although x = 0 might be what you preferred.

What I want to say is that coming up with a signal implementation that allows 
Haskell recursion and has other advantages at the same time is a big 
challenge. There are three features, you might want from a signal 
implementation:

1. support for recursive signal definitions using Haskell equations

2. memoization (for avoiding time leaks, for example)

3. signals which may depend on external sources

I tried hard to support 2 and 3 well in Grapefruit. Fran has 1 but has 
problems with 2 and 3. I don’t know whether Reactive has a solution for 2 in 
the context of recursive definitions, i.e., whether the space and time leak 
problems of Fran were solved. I think, it has at least problems with 3.

For me, 2 and 3 are more important than 1. One reason is that 1 has other 
problems than being in conflict with 2 and 3. The result of a recursively 
defined signal depends on when sampling occurs. Since a recursive definition 
tends to result in accumulation of values, errors introduced by sampling may 
accumulate too. So you have to use clever approximation algorithms in 
addition.

My new idea is to view the problem in a different way. Let’s take the problem 
where the position of a ball is the integral of the difference between the 
mouse position and the ball position. As far as I can see, the transformation 
of the mouse position signal into the ball position signal forms a linear 
system. So the ball position signal is the mouse position signal convolved 
with another signal.

If we had a function that performs convolution, we could probably 
implement many problems using this function. Using convolution would let us 
get rid of the problems with accumulating errors.

I suppose that most of the recursive definitions you would use in FRP are 
differential or integral equations. Let’s look at the equation for the 
ball-following-the-mouse example:

ballPos = integral (mousePos - ballPos)

ballPos can be represented in terms of mousePos as follows:

ballPos = (exp . negate) * mousePos

where * means convolution. We could provide a library which supports common 
stuff like distance-dependent acceleration, friction etc.
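
Here is a crude numerical check of that claim, in plain Haskell with an
assumed mouse-position signal (nothing Grapefruit-specific): Euler
integration of ballPos' = mousePos - ballPos and the discretised convolution
with exp . negate agree closely for a small step size.

dt :: Double
dt = 0.001                      -- step size for both approximations

mousePos :: Double -> Double    -- an assumed mouse-position signal
mousePos t = sin t

-- Euler integration of  ballPos' = mousePos - ballPos,  ballPos 0 = 0
euler :: Double -> Double
euler tEnd = go 0 0
  where
    go t b | t >= tEnd = b
           | otherwise = go (t + dt) (b + dt * (mousePos t - b))

-- discretised convolution of mousePos with the kernel (exp . negate)
convolved :: Double -> Double
convolved tEnd =
  dt * sum [ exp (negate (tEnd - s)) * mousePos s | s <- [0, dt .. tEnd] ]

main :: IO ()
main = do
  print (euler 5)       -- the two numbers agree to a few decimal places
  print (convolved 5)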

Of course, you could say that now the programmer isn’t able to enter 
his equations directly anymore, which isn’t nice. However, this situation is 
not different from other areas. For example, you cannot write

x = 5 - x / 2

and expect x to be 10 / 3. Instead, you have to transform the equation first.

 Soon or later it you'll want it elsewhere. A recursive reactive signal just
 means that some of what your reactive program computed will be
 usefull/necessary for the next step.

You can do this with convolution, I think.

By the way, continuous signals don’t have steps. These are just introduced by 
sampling.

 You'll see that happen very quickly (e.g. any simple reactive state
 machine).

“State machine” sounds like a discrete problem. You can use accumulation over 
event sequences here (as, for example, provided by the scan function in 
FRP.Grapefruit.Signal.Discrete).

By the way, the adress of the Grapefruit mailing list is 
grapefr...@projects.haskell.org, not grapefr...@haskell.org.

Best wishes,
Wolfgang
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [reactive] FRP + physics / status of hpysics

2009-03-06 Thread Wolfgang Jeltsch
On Friday, 6 March 2009 at 17:51, Wolfgang Jeltsch wrote:
 By the way, the adress of the Grapefruit mailing list is
 grapefr...@projects.haskell.org, not grapefr...@haskell.org.

Oh, this is really strange: I addressed my e-mail to 
grapefr...@projects.haskell.org but the version arriving at the Reactive 
mailing list has grapefr...@haskell.org in its To: header. However, my e-mail 
also reached the Grapefruit mailing list (but Daniel Bünzli’s didn’t) and the 
version there has the correct address in its To: headers.

Does anyone know who is responsible for the Haskell mail server?

Best wishes,
Wolfgang
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hint and Ambiguous Type issue

2009-03-06 Thread Joseph Fredette
Thanks so much, I think I understand. This definitely sounds like what I 
want to do. I guess I've got some learning to do...


That's why I love Haskell so much, every other day it gives me something 
new to learn.


Thanks again,

/Joe

Daniel Gorín wrote:
I think you can achieve what you want but you need to use the correct 
types for it. Remember that when you write:


getFilterMainStuff :: Deliverable a = FilePath - Interpreter (Path, 
Filter a)


the proper way to read the signature is the caller of 
getFilterMainStuff is entitled to pick the type of a, as long as it 
picks an instance of Deliverable. Contrast this with a method 
declaration in Java where:


public Set getKeys()

is to be read: The invoked object may pick the type of the result, as 
long as it is a subclass of (or implements) Set.


When you say that you want to apply fMain to a (Config, Email) and 
get back a Deliverable a, I think you mean that fMain picks the type 
for a (and has to be an instance of Deliverable). There two ways to do 
this in Haskell:


1) You don't. If you know that your possible Deliverables are just 
FlatEmail and MaildirEmail, then the idiomatic way of doing this would 
be to turn Deliverable into an ADT:


data Deliverable = FlatEmail  | MaildirEmail  deriving (Typeable)
getFilterMainStuff :: FilePath - Interpreter (Path, Filter Deliverable)

2) Existential types. If, for some reason, you need your dynamic 
code to be able to define new deliverables, then you need to use 
the extension called existential types.


-- using GADT syntax
data SomeDeliverable where Wrap :: Deliverable a = a - SomeDeliverable

getFilterMainStuff :: FilePath - Interpreter (Path, Filter 
SomeDeliverable)


This basically resembles the contract of the Java world: if you run 
fMain you will get a value of type SomeDeliverable; you can 
pattern-match it and will get something whose actual type you don't 
know, but that it is an instance of class Deliverable.


See http://www.haskell.org/haskellwiki/Existential_type

Good luck!

Daniel

On Mar 6, 2009, at 2:33 AM, Joseph Fredette wrote:

Okay, I think I understand... I got so hung up thinking the error had 
to be in the Interpreter code, I didn't bother to look in the caller...


But every answer breeds another question... The practical reason for 
inferring fMain as being of type Deliverable a = Filter a, is to 
apply it (via runReader) to a (Config, Email) and get back a 
Deliverable a, then to use the deliverIO method on the result -- my 
question is, it appears I have to know the specific type of a in 
order to get the thing to typecheck, but in order to use it, I need 
to not know it...


Perhaps, in fact, I'm doing this wrong. Thanks for the help Daniel, 
everyone...


/Joe

Daniel Gorín wrote:

Ok, so I've pulled the latest version and the error I get now is:

Hackmain.hs:70:43:
   Ambiguous type variable `a' in the constraint:
 `Deliverable a'
   arising from a use of `getFilterMainStuff' at 
Hackmain.hs:70:43-60

   Probable fix: add a type signature that fixes these type variable(s)

Function getFilterMainStuff compiles just fine . The offending line 
is in buildConf and reads:


 (inboxL, fMain) - runUnsafeInterpreter . getFilterMainStuff $ 
filterMainL


The problem is that GHC can't figure out the type of fMain. It 
infers (Filter a), but doesn't know what is a and therefore how to 
build a proper dictionary to pass to getFilterMainStuff.


Observe that you would get a similar error message if you just defined:

 f = show . read

I can get it to compile by providing a type annotation for fMain:

 (inboxL, fMain) - runUnsafeInterpreter . getFilterMainStuff $ 
filterMainL

 let _ = fMain :: Filter MaildirEmail

So once you use fMain somewhere and GHC can infer it's type, 
everything should work fine.


Daniel

On Mar 5, 2009, at 11:26 PM, Joseph Fredette wrote:

Oh, crap- I must have never pushed the latest patches, I did put 
the typeable instances in all the appropriate places. And provided 
a (maybe incorrect? Though I'm fairly sure that shouldn't affect 
the bug I'm having now) Typeable implementation for Reader, but I 
still get this ambiguous type. I'll push the current version asap.


Thanks.

/Joe

Daniel Gorín wrote:

Hi

I've downloaded Hackmain from patch-tag, but I'm getting a 
different error. The error I get is:


Hackmain.hs:63:10:
  No instance for (Data.Typeable.Typeable2
 Control.Monad.Reader.Reader)
arising from a use of `interpret' at Hackmain.hs:63:10-67

Hint requires the interpreted values to be an instance of Typeable 
in order to check, in runtime, that the interpreted value matches 
the type declared at compile. Therefore, you need to make  sure 
that (Filter a) is indeed an instance of Typeable.


Since you have Filter a = Reader (Config, Email) a, you probably 
need to


- Derive Config and Email instances for Filter,

- Manually provide Typeable instances for Reader a b, something 
along the lines of:



Re: [Haskell-cafe] Re: MPI

2009-03-06 Thread Don Stewart
fft1976:
 On Wed, Mar 4, 2009 at 5:03 PM, FFT fft1...@gmail.com wrote:
  Are MPI bindings still the best way of using Haskell on Beowulf
  clusters? It's my feeling that the bindings stagnated, or are they
  just very mature?
 
 What's the story with distributed memory multiprocessing? Are Haskell
 programmers uninterested in it, or are things other than MPI used with
 it?

http://haskell.org/haskellwiki/Applications_and_libraries/Concurrency_and_parallelism#Distributed_Haskell
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: MPI

2009-03-06 Thread Bryan O'Sullivan
On Thu, Mar 5, 2009 at 10:43 AM, FFT fft1...@gmail.com wrote:


  Are MPI bindings still the best way of using Haskell on Beowulf
  clusters? It's my feeling that the bindings stagnated, or are they
  just very mature?


MPI itself hasn't changed in 14 years, so it's not exactly a moving target.
(There's an MPI 2.0, but its most visible changes are not really usable.)

What's the story with distributed memory multiprocessing? Are Haskell
 programmers uninterested in it, or are things other than MPI used with
 it?


The ratio of work to payoff is unfortunately very high, so it seems to have
been abandoned as a topic of fruitful research.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: MPI

2009-03-06 Thread David Leimbach
2009/3/6 Bryan O'Sullivan b...@serpentine.com

 On Thu, Mar 5, 2009 at 10:43 AM, FFT fft1...@gmail.com wrote:


  Are MPI bindings still the best way of using Haskell on Beowulf
  clusters? It's my feeling that the bindings stagnated, or are they
  just very mature?


 MPI itself hasn't changed in 14 years, so it's not exactly a moving target.
 (There's an MPI 2.0, but its most visible changes are not really usable.)


MPI forum meetings are ongoing now to update it once again :-)

Having implemented MPI 2, I find the comment that the visible changes are not
really usable interesting; it is really more of an opinion (one that
I typically share for some parts of the API, but not others).




 What's the story with distributed memory multiprocessing? Are Haskell
 programmers uninterested in it, or are things other than MPI used with
 it?


 The ratio of work to payoff is unfortunately very high, so it seems to have
 been abandoned as a topic of fruitful research.


I think you're better off with some message passing system than most
alternatives in almost all cases when it comes to distributed, concurrent,
and even some kinds of parallel programs, but that's based on my real-world
experience implementing efficient message passing for customers for
about 5 or 6 years, so I'm a bit biased.





 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: MPI

2009-03-06 Thread Don Stewart
bos:
 On Thu, Mar 5, 2009 at 10:43 AM, FFT fft1...@gmail.com wrote:
 
 
  Are MPI bindings still the best way of using Haskell on Beowulf
  clusters? It's my feeling that the bindings stagnated, or are they
  just very mature?
 
 
 MPI itself hasn't changed in 14 years, so it's not exactly a moving target.
 (There's an MPI 2.0, but its most visible changes are not really usable.)
 
 
 What's the story with distributed memory multiprocessing? Are Haskell
 programmers uninterested in it, or are things other than MPI used with
 it?
 
 
 The ratio of work to payoff is unfortunately very high, so it seems to have
 been abandoned as a topic of fruitful research.


Though note the new paper for ICPP:

In this paper, we investigate the differences and tradeoffs
imposed by two parallel Haskell dialects running on multicore
machines. GpH and Eden are both constructed using the
highly-optimising sequential GHC compiler, and share thread
scheduling, and other elements, from a common code base. The GpH
implementation investigated here uses a physically-shared heap,
which should be well-suited to multicore architectures. In contrast,
the Eden implementation adopts an approach that has been designed
for use on distributed-memory parallel machines


http://www-fp.cs.st-and.ac.uk/~kh/mainICPP09.pdf

--   Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] do you have to use fix with forkio?

2009-03-06 Thread Luke Palmer
On Fri, Mar 6, 2009 at 1:48 AM, Daryoush Mehrtash dmehrt...@gmail.comwrote:

 Question:  Do I need to worry about space leak if I am using the fix to
 instead of the let?


If you need to worry about a space leak with fix, you need to worry about it
with let.

The reason arrows can tie the loop tighter is more about the nature of
recursion in streams; an arrow sees that prior values of a signal are not
used, whereas value recursion is much less restricted.  If, for example, the
arrow were a Kleisli arrow over the list monad, this would not be possible.

With the definition fix f = let x = f x in x, you should not see any
performance difference, other than the standard HOF penalty if there is not
enough inlining... but that should not be asymptotic anyway.
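
For what it's worth, here is a throwaway example of the two spellings side
by side (only base and Data.Function.fix):

import Data.Function (fix)

-- the same loop written with let ...
countdownLet :: Int -> IO ()
countdownLet n0 =
  let loop n = if n == 0 then putStrLn "done"
                         else print n >> loop (n - 1)
  in loop n0

-- ... and with fix; behaviour and allocation are the same
countdownFix :: Int -> IO ()
countdownFix n0 = ($ n0) $ fix $ \loop n ->
  if n == 0 then putStrLn "done"
            else print n >> loop (n - 1)

main :: IO ()
main = countdownLet 3 >> countdownFix 3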

Luke



 Thanks

 Daryoush
 2009/3/5 Luke Palmer lrpal...@gmail.com

 On Thu, Mar 5, 2009 at 6:27 PM, Donn Cave d...@avvanta.com wrote:

 Quoth Jonathan Cast jonathancc...@fastmail.fm:

  You can certainly use let:
 
reader - forkIO $ let loop = do
(nr', line) - readChan chan'
when (nr /= nr') $ hPutStrLn hdl line
loop
  in loop
 
  But the version with fix is clearer (at least to people who have fix in
  their vocabulary) and arguably better style.

 Would you mind presenting the better style argument?  To me, the
 above could not be clearer, so it seems like the version with fix
 could be only as clear, at best.


 I like using fix when it's simple rather than let, because it tells me the
 purpose of the binding.  eg., when I see

 let foo = ...

 Where ... is fairly long, I'm not sure what the purpose of foo is, or what
 its role is in the final computation.  It may not be used at all, or passed
 to some modifier function, or I don't know what.  Whereas with:

fix $ \foo - ...

 I know that whatever ... is, it is what is returne, and the purpose of foo
 is to use that return value in the expression itself.

 I know that it's a simple matter of scanning to the corresponding in,
 but let can be used for a lot of things, where as fix $ \foo is basically
 only for simple knot-tying.  Now, that doesn't say anything about the use of
 fix without an argument (passed to an HOF) or with a tuple as an argument or
 many other cases, which my brain has not chunked nearly as effectively.  I
 think fix is best with a single, named argument.

 Luke

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] FRP + physics / status of hpysics

2009-03-06 Thread Roman Cheplyaka
* Peter Verswyvelen bugf...@gmail.com [2009-03-06 11:17:50+0100]
 Regarding hpysics, did anybody did some experiments with this?

Nothing I'm aware of.

 The blog
 seems to be inactive since december 2008; has development ceased?

Sort of. One reason is that DPH does not seem to be ready for hpysics
yet, another one is that I don't see any potential users around (read: I
just need a kick in the ass).

 Integrating hpysics with Grapefruit might be a good topic for the Hackaton,
 trying to make a simple game (e.g. Pong or Breakout) without using recursive
 signal functions, but with correct collision response and better-than-Euler
 integration, all handled by the physics engine. Other FRP engines could be
 tried, but Grapefruit hacking is already a topic on the Hackaton, so it
 would combine efforts.

Yes, I'm actively pondering Hpysics+Grapefruit (it's the primary reason
for my interest in Grapefruit). But first of all we need to get graphics
support into Grapefruit.

Does your proposal re Hackathon indicate that you'd like to join?

-- 
Roman I. Cheplyaka :: http://ro-che.info/
Don't let school get in the way of your education. - Mark Twain
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Looking for a co-founder for a startup using Haskell

2009-03-06 Thread Ed McCaffrey
Hello,

I'm turning a project involving music into a startup, and I will be using
Haskell for most and possibly all of it.  I had an angel investor interested
until the market collapse forced him to turn his focus away from new
investments.  Other investors I have spoken with want me to contact them
again when it is further developed; basically, I think that I am optimistic
about the prospect of funding with the right team.

I am looking for one or more co-founders in the Boston, MA, area who would
be willing to work on this full-time in the near future, after funding is
secured, and in their spare time until then.  Since many investors are
currently moving towards larger investments in growth-stage companies, I am
going to apply to incubators who specialize in early-stage companies.

Anyone who wants more info send a reply to e...@edmccaffrey.net


Thanks,




Ed McCaffrey
e...@edmccaffrey.net
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Looking for a co-founder for a startup using Haskell

2009-03-06 Thread Paul Johnson

Ed McCaffrey wrote:
Other investors I have spoken with want me to contact them again when 
it is further developed; 


That means no.  See 
http://blog.guykawasaki.com/2006/01/the_top_ten_lie.html


Paul.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: MPI

2009-03-06 Thread FFT
On Fri, Mar 6, 2009 at 9:30 AM, Don Stewart d...@galois.com wrote:


 http://haskell.org/haskellwiki/Applications_and_libraries/Concurrency_and_parallelism#Distributed_Haskell


These are all Haskell-derived languages, not libraries, right?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] string type class

2009-03-06 Thread Duncan Coutts
On Fri, 2009-03-06 at 19:16 +, Chris Kuklewicz wrote:
 Matthew Pocock wrote:
  It seems every time I look at hackage there is yet another stringy 
  datatype. For lots of apps, the particular stringy datatype you use 
  matters for performance but not algorithmic reasons. Perhaps this is a 
  good time for someone to propose a stringy class?
 
 Not likely.
 
 I did define my own (private) class for regular expressions, to abstract over 
 String, the ByteStrings, and Seq Char.  But it is used in one place and is a 
 wart that should be removed.
 
 The simple task of looping over the contents of a String (once, forward) is 
 quite different from a Strict ByteString (using an index and a lookup).
 
 This means for decent efficiency I need two copies of my code, hand 
 specialized 
 to each case.
 
 tail or (x:xs) : very efficient for String (no allocation)
 tail or uncons : not efficient for ByteString (allocation, might as well 
 convert to [Char]

head/tail/uncons on a strict or lazy ByteString is perfectly ok. I would
recommend it over using length+index. If you are using tail in a tail
recursion then ghc can almost always optimise it to unbox the various
ByteString components and so there is no allocation of fresh BS
constructors, just adjusting offsets and lengths held in registers.

It's also possible to use unsafeHead and unsafeTail if you've checked
for null.
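
For example, a single forward pass with uncons looks like this (a made-up
space-counting function, not code from the thread):

{-# LANGUAGE BangPatterns #-}
import qualified Data.ByteString.Char8 as B

-- one forward pass; the strict accumulator keeps the tail call thunk-free
countSpaces :: B.ByteString -> Int
countSpaces = go 0
  where
    go !n bs = case B.uncons bs of
      Nothing        -> n
      Just (c, rest) -> go (if c == ' ' then n + 1 else n) rest

main :: IO ()
main = print (countSpaces (B.pack "hello strict world"))  -- prints 2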

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Data.Binary stack overflow with Data.Sequence String

2009-03-06 Thread Gwern Branwen
On Thu, Mar 5, 2009 at 6:51 AM, Neil Mitchell ndmitch...@gmail.com wrote:
 Hi Gwern,

 I get String/Data.Binary issues too. My suggestion would be to change
 your strings to ByteString's, serialise, and then do the reverse
 conversion when reading. Interestingly, a String and a ByteString have
 identical Data.Binary reps, but in my experiments converting,
 including the cost of BS.unpack, makes the reading substantially
 cheaper.

 Thanks

 Neil

Ah, thanks for the advice. Switching to (strict) ByteString seems to
resolve the stack overflow. (And thank goodness too, I need my
ireader!)

I hadn't realized it was the String that was messing things up and
being lazy. Very annoying! (The String code was cleaner - fewer
packs/unpacks.)
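
In case it helps anyone else hitting this, a minimal sketch of the
workaround Neil describes (the function names are mine): pack to a strict
ByteString on the way in, unpack on the way out.

import Data.Binary (decode, encode)
import qualified Data.ByteString.Char8 as B
import qualified Data.ByteString.Lazy as BL

-- serialise the String as a strict ByteString instead of as [Char]
-- (Char8 pack truncates to 8 bits, which is fine for ASCII text)
putString :: String -> BL.ByteString
putString = encode . B.pack

-- decode to a strict ByteString and convert back at the edge
getString :: BL.ByteString -> String
getString = B.unpack . decode

main :: IO ()
main = putStrLn (getString (putString "round-trips fine"))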

-- 
gwern
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Data.Binary stack overflow with Data.Sequence String

2009-03-06 Thread Gwern Branwen
On Thu, Mar 5, 2009 at 2:55 PM, Neil Mitchell ndmitch...@gmail.com wrote:
 Avoid massive reductions in runtime while maintaining the same API?

 I did move to using ByteString's internally for those bits later on,
 but reading String's from Data.Binary with a ByteString+unpack went
 much more quickly than reading String's

 On Thu, Mar 5, 2009 at 7:35 PM, Don Stewart d...@galois.com wrote:
 Avoid unpack!

I wish I could use ByteStrings throughout and avoid packing/unpacking,
but unfortunately I don't control the Yi interfaces! If I want to
stick something in a buffer, String it is. This is where recent
discussions about Stringable classes certainly seem apropos - it would
be nice if we didn't have to do all this marshalling and converting by
hand, indeed.

-- 
gwern
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Test if a file is empty or stat in haskell

2009-03-06 Thread Anish Muttreja
On Thu, Mar 05, 2009 at 03:43:22PM -0500, Thomas DuBuisson wrote:
  getFileStatus, fileSize ...
 
  Great, thanks.
  BTW, Hoogle does not seem to  know about
  System.Posix.Files, though it did point me to System.IO.FileSize
  which would also have served the purpose.
 
 Yep, this was discussed in a Hoogle and Network.Socket thread I
 started last week.  There is a wiki for summarizing thoughts [1].
 Some time this weekend or next week I intend to submit a Hoogle ticket
 proposing what I see as an agreeable outcome.  At that point I hope
 Neil will accept and maybe even find time to write code.  The final
 step would involve me buying Neil a beer - I've lots of beer-debt to
 the Haskell community.
 
 [1] http://haskell.org/haskellwiki/Hoogle/Packages

Thanks Michael and Neil, for maintaining Hoogle. Hoogle is becoming a
victim of its own success. We depend on it and so want it to do
everything :-).
I cannot edit the wiki page because creating new logins is currently
disabled. My only suggestion is that perhaps the Hoogle front page
should have a discreet disclaimer about the current scope of indexed
packages until a solution is found. It might be useful for someone new.

Anish

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Type question re: map

2009-03-06 Thread R J


Given the following (usual) definition of map:

map            :: (a -> b) -> [a] -> [b]
map f []       =  []
map f (x : xs) =  f x : map f xs

What's the type of map map?

GHCi's :t command reveals:

*Main :t map map
map map :: [a -> b] -> [[a] -> [b]]

I'd be grateful if anyone could provide a systematic type calculation so that I 
can reason through more complicated examples myself.

Thanks.

_
Windows Live™: Life without walls.
http://windowslive.com/explore?ocid=TXT_TAGLM_WL_allup_1a_explore_032009___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type question re: map

2009-03-06 Thread Luke Palmer
2009/3/6 R J rj248...@hotmail.com


 Given the following (usual) definition of map:

 map :: (a -> b) -> [a] -> [b]

 What's the type of map map?


The definition is irrelevant, so I removed it.

To make it easier to reason about, I'm going to rename the second map to
map'.  It means the same thing, but this is just so we can talk about each
instance of it clearly.  Now I'm going to rename the free variables:

map  :: (a -> b) -> [a] -> [b]
map' :: (c -> d) -> [c] -> [d]

-> is right-associative, so I'll add the implied parentheses:

map  :: (a -> b) -> ([a] -> [b])
map' :: (c -> d) -> ([c] -> [d])

Whenever we have an application like f x, if f has type a -> b and x has
type a, then f x has type b.

So map map' says that (a -> b) should be unified with the type of map', and
then the type of the result will be ([a] -> [b]).  So the equation is:

a -> b  =  (c -> d) -> ([c] -> [d])

Which implies

a = c -> d
b = [c] -> [d]

That's as far as we can go with the unification, so the result will be
[a] -> [b].  Substituting, we have [c -> d] -> [[c] -> [d]].

Does that help?  What I have done is more-or-less the Hindley-Milner type
inference algorithm.
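
And a quick sanity check of that type in action (monomorphised to Int for
the example):

fs :: [[Int] -> [Int]]
fs = map map [(+ 1), (* 2)]          -- i.e. [map (+ 1), map (* 2)]

main :: IO ()
main = print (zipWith ($) fs [[1, 2, 3], [10, 20]])  -- [[2,3,4],[20,40]]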

Luke
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type question re: map

2009-03-06 Thread sam lee
map :: (a -> b) -> [a] -> [b]

map takes a function and transforms a list of a's to b's.

map succ [1,2,3]
== [succ 1, succ 2, succ 3]
== [2, 3, 4]

In general,
map f :: [a] -> [b]
where a is domain-type of f and b is image-type of f (f :: a -> b).

map map [x, y, z]
== [map x, map y, map z]

so, x, y, z should be functions of type (a -> b),
and the result list, [map x, map y, map z], should have type [([a] -> [b])]
because
map x :: [a] -> [b]    where x :: a -> b



On 3/6/09, R J rj248...@hotmail.com wrote:


 Given the following (usual) definition of map:

 map:: (a - b) - [a] - [b]
 map f []   =  []
 map f (x : xs) =  f x : map f xs

 What's the type of map map?

 GHCi's :t command reveals:

 *Main :t map map
 map map :: [a - b] - [[a] - [b]]

 I'd be grateful if anyone could provide a systematic type calculation so
 that I can reason through more complicated examples myself.

 Thanks.

 _
 Windows Live™: Life without walls.
 http://windowslive.com/explore?ocid=TXT_TAGLM_WL_allup_1a_explore_032009
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Interesting problem from Bird (4.2.13)

2009-03-06 Thread wren ng thornton

Gleb Alexeyev wrote:
Here's my attempt though it's not really different from using built-in 
lists:


viewCL CatNil = Nothing
viewCL (Wrap a) = Just (a, CatNil)
viewCL (Cat a b) = case viewCL a of
  Nothing      -> viewCL b
  Just (x, xs) -> Just (x, Cat xs b)


My solution was a refinement on this.

split = go id
    where
    go _ Nil          = Nothing
    go k (Wrap a)     = Just (a, k Nil)
    go k (xs :++: ys) = case go ((ys :++:) . k) xs of
                            Nothing -> go k ys
                            view    -> view

The trick is in the CPS instead of the direct approach of the original. 
In the original, if you have a strongly left-branching tree then you 
need to break the whole spine and you end up constructing another 
strongly left-branching spine. In this version we construct a 
right-branching spine instead, which allows future calls to be faster.



*Main inL[1..5]
(((Wrap 1 :++: Wrap 2) :++: Wrap 3) :++: Wrap 4) :++: Wrap 5

*Main viewCL $ inL[1..5]
Just (1,(((Nil :++: Wrap 2) :++: Wrap 3) :++: Wrap 4) :++: Wrap 5)

*Main split $ inL[1..5]
Just (1,Wrap 2 :++: (Wrap 3 :++: (Wrap 4 :++: Wrap 5)))

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe