Re: Aliasing current module qualifier

2014-09-30 Thread John Meacham
Yes, that is the semantics I use for recursive module imports in jhc. And
you are right that it does not accept those examples, due to being unable
to bootstrap the least fixed point.

How would the 'as M' proposal interact? Would it actually create new entries
in the name table, or be rewritten as a macro to the current module name? I
can see some edge cases where it makes a difference. I am thinking the
easiest would be to populate entries for all the M.toplevel names, where
toplevel ranges over the top level definitions of the module; I will
implement it and see how it shakes out.
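A minimal, hypothetical model of that name-table population (the `Name`/`NameTable` types and the `selfAlias` function below are illustrative, not jhc's actual internals):

```haskell
import qualified Data.Map as Map

-- Hypothetical sketch: given the alias from "import Self as M" and the
-- module's top-level definitions, populate the name table with both
-- unqualified and qualified entries, so M.x resolves without a genuine
-- recursive import.
type Name = String
type NameTable = Map.Map Name Name  -- surface name -> defining name

selfAlias :: String -> [Name] -> NameTable
selfAlias alias topLevels = Map.fromList $
    [ (n, n) | n <- topLevels ]                  -- unqualified entries
 ++ [ (alias ++ "." ++ n, n) | n <- topLevels ]  -- qualified M.n entries
```

Under this model, `selfAlias "M" ["length", "null"]` makes `M.length` resolve to the local `length`, i.e. the "populate entries" approach rather than the macro-rewrite one.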

John

On Tue, Sep 30, 2014 at 5:18 AM, Iavor Diatchki iavor.diatc...@gmail.com
wrote:

 Hello,

 What semantics are you using for recursive modules?  As far as I see, if
 you take a least fixed point semantics (e.g. as described in "A Formal
 Specification for the Haskell 98 Module System",
 http://yav.github.io/publications/modules98.pdf ), this program is
 incorrect, as the module does not export anything.

 While this may seem a bit counterintuitive at first, this semantics has
 the benefit of being precise, easily specified, and uniform (e.g. it does
 not require any special treatment of the "current" module).  As an
 example, consider the following variation of your program, where I just
 moved the definition into a separate (still recursive) module:

 module A (M.x) where
   import B as M

 module B (M.x) where
   import A as M
   x = True

 I think that it'd be quite confusing if a single recursive module worked
 differently than a larger recursive group, but it is not at all obvious why
 B should export 'x'.  And for those who like this kind of puzzle: what
 should happen if 'A' also had a definition for 'x'?

 Iavor
  On Sep 29, 2014 11:02 PM, John Meacham j...@repetae.net wrote:

You don't need a new language construct; what I do is:

  module AnnoyinglyLongModuleName (M.length, M.null) where

import AnnoyinglyLongModuleName as M

I think GHC would need to be extended a little to make this convenient,
as it doesn't handle recursive module imports transparently.

 John

 On Mon, Sep 29, 2014 at 8:47 AM, Brandon Allbery allber...@gmail.com
 wrote:

 On Mon, Sep 29, 2014 at 4:19 AM, Herbert Valerio Riedel h...@gnu.org
 wrote:

 Now it'd be great if I could do the following instead:

 module AnnoyinglyLongModuleName (M.length, M.null) where

 import AnnoyinglyLongModuleName as M   -- does not work


 I think if I wanted this syntax, I'd go for:

 module AnnoyinglyLongModuleName as M where ...

 --
 brandon s allbery kf8nh   sine nomine
 associates
 allber...@gmail.com
 ballb...@sinenomine.net
 unix, openafs, kerberos, infrastructure, xmonad
 http://sinenomine.net

 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users




 --
 John Meacham - http://notanumber.net/

 ___
 ghc-devs mailing list
 ghc-d...@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs




-- 
John Meacham - http://notanumber.net/


Re: Aliasing current module qualifier

2014-09-29 Thread John Meacham
You don't need a new language construct; what I do is:

 module AnnoyinglyLongModuleName (M.length, M.null) where

import AnnoyinglyLongModuleName as M

I think GHC would need to be extended a little to make this convenient, as
it doesn't handle recursive module imports transparently.

John

On Mon, Sep 29, 2014 at 8:47 AM, Brandon Allbery allber...@gmail.com
wrote:

 On Mon, Sep 29, 2014 at 4:19 AM, Herbert Valerio Riedel h...@gnu.org
 wrote:

 Now it'd be great if I could do the following instead:

 module AnnoyinglyLongModuleName (M.length, M.null) where

 import AnnoyinglyLongModuleName as M   -- does not work


 I think if I wanted this syntax, I'd go for:

 module AnnoyinglyLongModuleName as M where ...

 --
 brandon s allbery kf8nh   sine nomine
 associates
 allber...@gmail.com
 ballb...@sinenomine.net
 unix, openafs, kerberos, infrastructure, xmonad
 http://sinenomine.net





-- 
John Meacham - http://notanumber.net/


Re: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation?

2014-07-05 Thread John Meacham
Actually, I was looking into it a little, and Template Haskell could
effectively be implemented by a pre-processor and a portable library
that is compiler independent. If one could get GHC to spit out the
Template Haskell source after it expands it, then that could be fed to
jhc as a quick first pass, but ideally the pre-processor TH would
create programs that can be run under the target compiler. That would
bring TH to every Haskell compiler.

John

On Sat, Jul 5, 2014 at 10:38 AM, Brandon Allbery allber...@gmail.com wrote:
 On Sat, Jul 5, 2014 at 1:34 PM, Carter Schonwald
 carter.schonw...@gmail.com wrote:

 does JHC support template haskell?


 Pretty sure TH is too closely tied to ghc.

 --
 brandon s allbery kf8nh   sine nomine associates
 allber...@gmail.com  ballb...@sinenomine.net
 unix, openafs, kerberos, infrastructure, xmonadhttp://sinenomine.net



-- 
John Meacham - http://notanumber.net/


Re: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation?

2014-07-05 Thread John Meacham
The target compiler would have the TH libraries, which could be made
portable. The external program would just extract the TH bits and turn
them into a program that spits the TH-expanded output to a new file to
compile, and repeat the process till no TH expansions exist; finally,
that result is what you pass to the compiler.
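The expand-until-fixpoint loop described here can be sketched as a small driver; `expandSplices` below is a hypothetical stand-in for the external tool that rewrites one round of splices:

```haskell
-- Hedged sketch of the preprocessor loop: keep expanding splices until a
-- pass performs none, then the result is what goes to the target compiler.
expandFully :: Monad m
            => (String -> m (String, Int))  -- returns (source, splices run)
            -> String -> m String
expandFully expandSplices = go
  where
    go src = do
        (src', n) <- expandSplices src
        if n == 0
            then return src'   -- no TH left; pass this to the compiler
            else go src'       -- expansion may have produced new splices
```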

 John

On Sat, Jul 5, 2014 at 1:09 PM, Luite Stegeman stege...@gmail.com wrote:
 How would you do reification with that approach?


 On Sat, Jul 5, 2014 at 9:59 PM, John Meacham j...@repetae.net wrote:

 Actually, I was looking into it a little, and Template Haskell could
 effectively be implemented by a pre-processor and a portable library
 that is compiler independent. If one could get GHC to spit out the
 Template Haskell source after it expands it, then that could be fed to
 jhc as a quick first pass, but ideally the pre-processor TH would
 create programs that can be run under the target compiler. That would
 bring TH to every Haskell compiler.

 John

 On Sat, Jul 5, 2014 at 10:38 AM, Brandon Allbery allber...@gmail.com
 wrote:
  On Sat, Jul 5, 2014 at 1:34 PM, Carter Schonwald
  carter.schonw...@gmail.com wrote:
 
  does JHC support template haskell?
 
 
  Pretty sure TH is too closely tied to ghc.
 
  --
  brandon s allbery kf8nh   sine nomine
  associates
  allber...@gmail.com
  ballb...@sinenomine.net
  unix, openafs, kerberos, infrastructure, xmonad
  http://sinenomine.net



 --
 John Meacham - http://notanumber.net/





-- 
John Meacham - http://notanumber.net/


Re: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation?

2014-07-04 Thread John Meacham
Hmm... it works on my Nexus 4. Kiwamu of the Metasepi project
(http://ajhc.metasepi.org/) is the one who uploaded the demo. Perhaps
he needs to update the key or something.

On Thu, Jul 3, 2014 at 9:43 PM, Dominick Samperi djsamp...@gmail.com wrote:
 Hello John,
 I tried to install the Haskell demo Cube on my Nexus 7
 and got: Error: package file was not signed correctly.
 D

 On Thu, Jul 3, 2014 at 4:47 PM, John Meacham j...@repetae.net wrote:
 In case anyone wanted to start writing Haskell Android code now, jhc
 fully supports Android as a target. Here is an app made with it:

 https://play.google.com/store/apps/details?id=org.metasepi.ajhc.android.cube

 This was made with Kiwamu's ajhc branch, but the code has been merged back
 into the main tree.

 On Wed, Jul 2, 2014 at 5:54 PM, Carter Schonwald
 carter.schonw...@gmail.com wrote:
 This would probably be a great boon for those trying to use Haskell for
 Android and iOS, right? How might the emulation setup work for those?




 On Wed, Jul 2, 2014 at 2:20 PM, Carter Schonwald
 carter.schonw...@gmail.com wrote:

 wow, this is great work!

 If there's a clear path to getting the generic tooling into 7.10, I'm all
 for it :) (and willing to help on concrete mechanical subtasks)


 On Wed, Jul 2, 2014 at 12:14 PM, Luite Stegeman stege...@gmail.com
 wrote:

 hi all,

 I've added some code [1] [2] to GHCJS to make it run Template Haskell
 code on node.js, rather than using the GHC linker. GHCJS has supported TH
 for a long time now, but so far always relied on native (host) code for it.
 This is the main reason that GHCJS always builds native and JavaScript code
 for everything (another is that Cabal Setup.hs scripts need to be compiled
 to some host-runnable form, but that can also be JavaScript if you have
 node.js).

 Now besides the compiler having to do twice the work, this has some other
 disadvantages:

 - Our JavaScript code has the same dependencies (packages) as native
 code, which means packages like unix or Win32 show up somewhere, depending
 on the host environment. This also limits our options in choosing
 JS-specific packages.
 - The Template Haskell code runs on the host environment, which might be
 slightly different from the target, for example in integer size or
 operating system specific constants.

 Moreover, building native code made the GHCJS installation procedure more
 tricky, making end users think about libgmp or libiconv locations, since it
 basically required the same preparation as building GHC from source. This
 change will make installing much easier and more reliable (we still have to
 update the build scripts).

 How it works is pretty simple:

 - When any code needs to be run on the target (hscCompileCoreExpr,
 through the Hooks API new in GHC 7.8), GHCJS starts a node.js process with
 the thrunner.js [3] script,
 - GHCJS sends its RTS and the Template Haskell server code [1] to
 node.js, the script starts a Haskell thread running the server,
 - for every splice, GHCJS compiles it to JavaScript and links it using
 its incremental linking functionality. The code for the splice, including
 dependencies that have not yet been sent to the runner (for earlier
 splices), is then sent in a RunTH [4] message,
 - the runner loads and runs the code in the Q monad, can send queries to
 GHCJS for reification,
 - the runner sends back the result as a serialized Template Haskell AST
 (using GHC.Generics for the Binary instances).
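 The message loop above might be modeled with types roughly like the following sketch; the constructor names here are illustrative assumptions, the real definitions live in the GHCJS module linked as [4]:

```haskell
-- Hedged sketch of the compiler <-> node.js runner protocol described
-- above. Names and payload types are assumptions, not GHCJS's actual API.
data Message
    = RunTH Int String      -- splice id plus linked JS code for the splice
    | QueryReify String     -- runner asks the compiler to reify a name
    | ReifyAnswer String    -- compiler's serialized reply
    | THResult Int String   -- serialized TH AST result for a splice
    deriving (Eq, Show)
```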

 All Template Haskell functionality is supported, including recent
 additions for reifying modules and annotations. I still need to clean up
 and push the patches for the directory and process packages, but after
 that, the TH code can read/write files, run processes and interact with
 them and make network connections, all through node.js.

 Now since this approach is in no way specific to JavaScript, I was
 wondering if there's any interest in getting this functionality into GHC
 7.10 for general cross compilation. The runner would be a native (target)
 program with dynamic libraries (or object files) being sent over to the
 target machine (or emulator) for the splices.

 Thanks to Andras Slemmer from Prezi who helped build the initial proof of
 concept (without reification) at BudHac.

 cheers,

 Luite

 [1]
 https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/src/Gen2/TH.hs
 [2]
 https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Eval.hs
 [3]
 https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/lib/etc/thrunner.js
 [4]
 https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Types.hs#L29






Re: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation?

2014-07-03 Thread John Meacham
In case anyone wanted to start writing Haskell Android code now, jhc
fully supports Android as a target. Here is an app made with it:

https://play.google.com/store/apps/details?id=org.metasepi.ajhc.android.cube

This was made with Kiwamu's ajhc branch, but the code has been merged back
into the main tree.

On Wed, Jul 2, 2014 at 5:54 PM, Carter Schonwald
carter.schonw...@gmail.com wrote:
 This would probably be a great boon for those trying to use Haskell for
 Android and iOS, right? How might the emulation setup work for those?




 On Wed, Jul 2, 2014 at 2:20 PM, Carter Schonwald
 carter.schonw...@gmail.com wrote:

 wow, this is great work!

 If there's a clear path to getting the generic tooling into 7.10, I'm all
 for it :) (and willing to help on concrete mechanical subtasks)


 On Wed, Jul 2, 2014 at 12:14 PM, Luite Stegeman stege...@gmail.com
 wrote:

 hi all,

 I've added some code [1] [2] to GHCJS to make it run Template Haskell
 code on node.js, rather than using the GHC linker. GHCJS has supported TH
 for a long time now, but so far always relied on native (host) code for it.
 This is the main reason that GHCJS always builds native and JavaScript code
 for everything (another is that Cabal Setup.hs scripts need to be compiled
 to some host-runnable form, but that can also be JavaScript if you have
 node.js)

 Now besides the compiler having to do twice the work, this has some other
 disadvantages:

 - Our JavaScript code has the same dependencies (packages) as native
 code, which means packages like unix or Win32 show up somewhere, depending
 on the host environment. This also limits our options in choosing
 JS-specific packages.
 - The Template Haskell code runs on the host environment, which might be
 slightly different from the target, for example in integer size or operating
 system specific constants.

 Moreover, building native code made the GHCJS installation procedure more
 tricky, making end users think about libgmp or libiconv locations, since it
 basically required the same preparation as building GHC from source. This
 change will make installing much easier and more reliable (we still have to
 update the build scripts).

 How it works is pretty simple:

 - When any code needs to be run on the target (hscCompileCoreExpr,
 through the Hooks API new in GHC 7.8), GHCJS starts a node.js process with
 the thrunner.js [3] script,
 - GHCJS sends its RTS and the Template Haskell server code [1] to
 node.js, the script starts a Haskell thread running the server,
 - for every splice, GHCJS compiles it to JavaScript and links it using
 its incremental linking functionality. The code for the splice, including
 dependencies that have not yet been sent to the runner (for earlier
 splices), is then sent in a RunTH [4] message,
 - the runner loads and runs the code in the Q monad, can send queries to
 GHCJS for reification,
 - the runner sends back the result as a serialized Template Haskell AST
 (using GHC.Generics for the Binary instances).

 All Template Haskell functionality is supported, including recent
 additions for reifying modules and annotations. I still need to clean up and
 push the patches for the directory and process packages, but after that, the
 TH code can read/write files, run processes and interact with them and make
 network connections, all through node.js.

 Now since this approach is in no way specific to JavaScript, I was
 wondering if there's any interest in getting this functionality into GHC
 7.10 for general cross compilation. The runner would be a native (target)
 program with dynamic libraries (or object files) being sent over to the
 target machine (or emulator) for the splices.

 Thanks to Andras Slemmer from Prezi who helped build the initial proof of
 concept (without reification) at BudHac.

 cheers,

 Luite

 [1]
 https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/src/Gen2/TH.hs
 [2]
 https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Eval.hs
 [3]
 https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/lib/etc/thrunner.js
 [4]
 https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Types.hs#L29









-- 
John Meacham - http://notanumber.net/


Re: RFC: Unicode primes and super/subscript characters in GHC

2014-06-27 Thread John Meacham
Yeah, I specifically excluded the ASCII prime (') from special handling in
jhc due to its already overloaded meaning in Haskell. I just added the
subscript/superscript ones to the 'trailing' character class.

John

On Wed, Jun 25, 2014 at 12:54 PM, Mikhail Vorozhtsov
mikhail.vorozht...@gmail.com wrote:
 Isn't it weird that you can't write `a₁'`? I was considering proposing

 varid → (small { small | large | digit | ' | primes } { subsup | primes })
 (EXCEPT reservedid)

 but felt that it would be odd to allow primes in the middle of an identifier
 but not super/subscripts. I wish we could just abandon things like `a'bc'd`
 altogether...


 On 06/15/2014 03:58 AM, John Meacham wrote:

 I have this feature in jhc, where I have a 'trailing' character class
 that can appear at the end of both symbols and ids.

 currently it consists of

   $trailing = [₀₁₂₃₄₅₆₇₈₉⁰¹²³⁴⁵⁶⁷⁸⁹₍₎⁽⁾₊₋]

   John

 On Sat, Jun 14, 2014 at 7:48 AM, Mikhail Vorozhtsov
 mikhail.vorozht...@gmail.com wrote:

 Hello lists,

 As some of you may know, GHC's support for Unicode characters in lexemes
 is rather crude and hence prone to inconsistencies in their handling
 versus the ASCII counterparts. For example, APOSTROPHE is treated
 differently from PRIME:

 λ data a +' b = Plus a b
 <interactive>:3:9:
     Unexpected type ‘b’
     In the data declaration for ‘+’
     A data declaration should have form
       data + a b c = ...
 λ data a +′ b = Plus a b

 λ let a' = 1
 λ let a′ = 1
 <interactive>:10:8: parse error on input ‘=’

 Also some rather bizarre looking things are accepted:

 λ let ᵤxᵤy = 1

 In the spirit of improving things little by little I would like to
 propose:

 1. Handle single/double/triple/quadruple Unicode PRIMEs the same way as
 APOSTROPHE, meaning the following alterations to the lexer:

 primes → U+2032 | U+2033 | U+2034 | U+2057
 symbol → ascSymbol | uniSymbol (EXCEPT special | _ | " | ' | primes)
 graphic → small | large | symbol | digit | special | " | ' | primes
 varid → (small { small | large | digit | ' | primes }) (EXCEPT reservedid)
 conid → large { small | large | digit | ' | primes }

 2. Introduce a new lexer nonterminal subsup that would include the
 Unicode sub/superscript[1] versions of numbers, "-", "+", "=", "(", ")",
 Latin and Greek letters. And allow these characters to be used in names
 and operators:

 symbol → ascSymbol | uniSymbol (EXCEPT special | _ | " | ' | primes |
 subsup)
 digit → ascDigit | uniDigit (EXCEPT subsup)
 small → ascSmall | uniSmall (EXCEPT subsup) | _
 large → ascLarge | uniLarge (EXCEPT subsup)
 graphic → small | large | symbol | digit | special | " | ' | primes |
 subsup
 varid → (small { small | large | digit | ' | primes | subsup }) (EXCEPT
 reservedid)
 conid → large { small | large | digit | ' | primes | subsup }
 varsym → (symbol (EXCEPT :) {symbol | subsup}) (EXCEPT reservedop |
 dashes)
 consym → (: {symbol | subsup}) (EXCEPT reservedop)
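 As a rough illustration, the proposed primes and subsup classes could be approximated by simple character predicates; the exact membership below is an assumption based on the code points named in the proposal:

```haskell
-- Hedged sketch: membership tests for the proposed lexer classes.
isPrime :: Char -> Bool
isPrime c = c `elem` "\x2032\x2033\x2034\x2057"   -- ′ ″ ‴ ⁗

isSubSup :: Char -> Bool
isSubSup c = c `elem` "₀₁₂₃₄₅₆₇₈₉⁰¹²³⁴⁵⁶⁷⁸⁹₊₋₌₍₎⁺⁻⁼⁽⁾"
```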

 If this proposal is received favorably, I'll write a patch for GHC based
 on my previous stab at the problem[2].

 P.S. I'm CC-ing Cafe for extra attention, but please keep the discussion
 to the GHC users list.

 [1] https://en.wikipedia.org/wiki/Unicode_subscripts_and_superscripts
 [2] https://ghc.haskell.org/trac/ghc/ticket/5108







-- 
John Meacham - http://notanumber.net/


Re: RFC: Unicode primes and super/subscript characters in GHC

2014-06-17 Thread John Meacham
Don't forget that for every line of Haskell code on Hackage there are
dozens of lines used internally within organizations, where
compatibility beyond their target internal tools may not be a concern.
Deciding on a policy of allowing primes or whatnot within an
organization seems quite plausible and doesn't entail CPP concerns.

John

On Sun, Jun 15, 2014 at 5:26 PM, Mateusz Kowalczyk
fuuze...@fuuzetsu.co.uk wrote:
 On 06/14/2014 04:48 PM, Mikhail Vorozhtsov wrote:
 Hello lists,

 As some of you may know, GHC's support for Unicode characters in lexemes
 is rather crude and hence prone to inconsistencies in their handling
 versus the ASCII counterparts. For example, APOSTROPHE is treated
 differently from PRIME:

 λ data a +' b = Plus a b
 <interactive>:3:9:
     Unexpected type ‘b’
     In the data declaration for ‘+’
     A data declaration should have form
       data + a b c = ...
 λ data a +′ b = Plus a b

 λ let a' = 1
 λ let a′ = 1
 <interactive>:10:8: parse error on input ‘=’

 Also some rather bizarre looking things are accepted:

 λ let ᵤxᵤy = 1

 In the spirit of improving things little by little I would like to propose:

 1. Handle single/double/triple/quadruple Unicode PRIMEs the same way as
 APOSTROPHE, meaning the following alterations to the lexer:

 primes → U+2032 | U+2033 | U+2034 | U+2057
 symbol → ascSymbol | uniSymbol (EXCEPT special | _ | " | ' | primes)
 graphic → small | large | symbol | digit | special | " | ' | primes
 varid → (small { small | large | digit | ' | primes }) (EXCEPT reservedid)
 conid → large { small | large | digit | ' | primes }

 2. Introduce a new lexer nonterminal subsup that would include the
 Unicode sub/superscript[1] versions of numbers, "-", "+", "=", "(", ")",
 Latin and Greek letters. And allow these characters to be used in names
 and operators:

 symbol → ascSymbol | uniSymbol (EXCEPT special | _ | " | ' | primes |
 subsup)
 digit → ascDigit | uniDigit (EXCEPT subsup)
 small → ascSmall | uniSmall (EXCEPT subsup) | _
 large → ascLarge | uniLarge (EXCEPT subsup)
 graphic → small | large | symbol | digit | special | " | ' | primes |
 subsup
 varid → (small { small | large | digit | ' | primes | subsup }) (EXCEPT
 reservedid)
 conid → large { small | large | digit | ' | primes | subsup }
 varsym → (symbol (EXCEPT :) {symbol | subsup}) (EXCEPT reservedop | dashes)
 consym → (: {symbol | subsup}) (EXCEPT reservedop)

 If this proposal is received favorably, I'll write a patch for GHC based
 on my previous stab at the problem[2].

 P.S. I'm CC-ing Cafe for extra attention, but please keep the discussion
 to the GHC users list.

 [1] https://en.wikipedia.org/wiki/Unicode_subscripts_and_superscripts
 [2] https://ghc.haskell.org/trac/ghc/ticket/5108


 While personally I like the proposal (wanted prime and sub/sup scripts
 way too many times), I worry what this means for compatibility reasons:
 suddenly we'll have code that fails to build on 7.8 and before because
 someone using 7.9/7.10+ used ′ somewhere. Even using CPP based on
 version of the compiler used is not too great in this scenario because
 it doesn't bring significant practical advantage to justify the CPP
 clutter in code. If the choice is either extra lines due to CPP or using
 ‘'’ instead of ‘′’, I know which I'll go for.

 I also worry (although not based on anything particular you said)
 whether this will not change meaning of any existing programs. Does it
 only allow new programs?

 Will it be enabled by a pragma?

 I simply worry about how practical it will be to use for actual programs
 and libraries that will go out on Hackage and wider world, even if it is
 accepted.

 --
 Mateusz K.



-- 
John Meacham - http://notanumber.net/


Re: RFC: Unicode primes and super/subscript characters in GHC

2014-06-14 Thread John Meacham
I have this feature in jhc, where I have a 'trailing' character class
that can appear at the end of both symbols and ids.

currently it consists of

 $trailing = [₀₁₂₃₄₅₆₇₈₉⁰¹²³⁴⁵⁶⁷⁸⁹₍₎⁽⁾₊₋]

 John

On Sat, Jun 14, 2014 at 7:48 AM, Mikhail Vorozhtsov
mikhail.vorozht...@gmail.com wrote:
 Hello lists,

 As some of you may know, GHC's support for Unicode characters in lexemes is
 rather crude and hence prone to inconsistencies in their handling versus the
 ASCII counterparts. For example, APOSTROPHE is treated differently from
 PRIME:

 λ data a +' b = Plus a b
 <interactive>:3:9:
     Unexpected type ‘b’
     In the data declaration for ‘+’
     A data declaration should have form
       data + a b c = ...
 λ data a +′ b = Plus a b

 λ let a' = 1
 λ let a′ = 1
 <interactive>:10:8: parse error on input ‘=’

 Also some rather bizarre looking things are accepted:

 λ let ᵤxᵤy = 1

 In the spirit of improving things little by little I would like to propose:

 1. Handle single/double/triple/quadruple Unicode PRIMEs the same way as
 APOSTROPHE, meaning the following alterations to the lexer:

 primes → U+2032 | U+2033 | U+2034 | U+2057
 symbol → ascSymbol | uniSymbol (EXCEPT special | _ | " | ' | primes)
 graphic → small | large | symbol | digit | special | " | ' | primes
 varid → (small { small | large | digit | ' | primes }) (EXCEPT reservedid)
 conid → large { small | large | digit | ' | primes }

 2. Introduce a new lexer nonterminal subsup that would include the Unicode
 sub/superscript[1] versions of numbers, "-", "+", "=", "(", ")", Latin and
 Greek letters. And allow these characters to be used in names and operators:

 symbol → ascSymbol | uniSymbol (EXCEPT special | _ | " | ' | primes |
 subsup)
 digit → ascDigit | uniDigit (EXCEPT subsup)
 small → ascSmall | uniSmall (EXCEPT subsup) | _
 large → ascLarge | uniLarge (EXCEPT subsup)
 graphic → small | large | symbol | digit | special | " | ' | primes |
 subsup
 varid → (small { small | large | digit | ' | primes | subsup }) (EXCEPT
 reservedid)
 conid → large { small | large | digit | ' | primes | subsup }
 varsym → (symbol (EXCEPT :) {symbol | subsup}) (EXCEPT reservedop | dashes)
 consym → (: {symbol | subsup}) (EXCEPT reservedop)

 If this proposal is received favorably, I'll write a patch for GHC based on
 my previous stab at the problem[2].

 P.S. I'm CC-ing Cafe for extra attention, but please keep the discussion to
 the GHC users list.

 [1] https://en.wikipedia.org/wiki/Unicode_subscripts_and_superscripts
 [2] https://ghc.haskell.org/trac/ghc/ticket/5108



-- 
John Meacham - http://notanumber.net/


Re: AlternateLayoutRule

2014-06-10 Thread John Meacham
After some work, I have replaced jhc's front end with one using an
AlternateLayoutRule, and it passes GHC's parsing regression tests. I
had to make some modifications to the code, so it isn't as pretty as
I'd like; I'll see if I can distill the essence out into something
concrete.

Incidentally, I found a particularly troublesome case for the current
GHC rule out in the wild during my testing:

-- Lining up |'s and indenting new blocks 4 spaces. A fairly common practice.
bar x y = case x of
    Just z | y > z*2 -> case y of
        _ -> y
           | y > z -> y*10
    _ -> 1

bar (Just 2) 3 == 30

-- add a constant True guard that should be a no-op
bar x y = case x of
    Just z | y > z*2 -> case y of
        _ | True -> y   -- <== added a no-op guard
           | y > z -> y*10
    _ -> 1

bar (Just 2) 3 == 1

The pattern match was fairly complicated, making what happened pretty obscure.

One of the main benefits I have found is being able to process the
token stream between the lexer and parser, which drastically
simplified my parser, making it almost shift/reduce conflict free. I
turn (+) and case into single lexemes and turn reservedids into
varids for the keywords of unenabled extensions.
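The reservedid-to-varid rewriting can be sketched as a pass over the token stream; the `Token` type and `demoteKeywords` function below are hypothetical simplifications, not jhc's actual lexer pipeline:

```haskell
-- Hedged sketch of post-lexing token rewriting: demote keywords belonging
-- to extensions that are not enabled back into ordinary identifiers, so
-- the parser never needs to know about disabled extensions.
data Token = TKeyword String | TVarId String | TOther String
    deriving (Eq, Show)

demoteKeywords :: [String]   -- keywords of extensions that are not enabled
               -> [Token] -> [Token]
demoteKeywords disabled = map demote
  where
    demote (TKeyword k) | k `elem` disabled = TVarId k
    demote t = t
```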

John

-- John Meacham - http://notanumber.net/


[Haskell] PROPOSAL: Record field type inference

2014-06-04 Thread John Meacham
This is also available as HTML at

http://repetae.net/computer/jhc/record_inference.html

Record Type Inference
=====================

An extension to the named field mechanism that will greatly enhance its
utility when combined with the existing `DisambiguateRecordFields`,
`RecordPuns`, and `RecordWildCards` extensions.

The proposal is to allow the types of record fields to be inferred by the
normal type inference engine. It would look like:

~~~~ {.haskell}
data Rec = Rec { fieldA, fieldB, fieldC }

f Rec { .. } = Rec { .. } where
    fieldA = True
    fieldB = 4
~~~~

This would infer the types `Bool`, `Int`, and `forall a . a` for the fields of
the record constructor, and `f :: Rec -> Rec` for f. For the purposes of type
checking, the fields are treated as monomorphic and not generalized, but
defaulted like normal after typechecking the module. Other than inferring the
types of the record fields, the records have the normal syntax. The extensions
`RecordPuns`, `RecordWildCards` and `DisambiguateRecordFields` will be enabled
when record field inference is enabled.

Selector functions will not be created for inferred records; that is, the names
are field labels and not functions. This means they do not share a namespace
with functions and do not conflict with each other. Multiple records may have
the same field names in the same module. This means the following is fine.

~~~~ {.haskell}
data Rec1 = Rec1 { input, withFoo, withoutFoo }
data Rec2 = Rec2 { input, withBar, withoutBar }

f Rec1 { .. } = case input of
    []     -> Rec1 { .. }
    (x:xs) -> if hasFoo x
        then Rec1 { withFoo = x:withFoo, .. }
        else Rec1 { withoutFoo = x:withoutFoo, .. }
~~~~

Possible extensions
-------------------

### as-pattern disambiguation

In order to make the disambiguation of record fields more useful without
relying on the type checker for disambiguation, we can declare that variables
explicitly bound to a constructor in a pattern match use that constructor to
disambiguate fields for operations on the variable. This is a purely syntactic
transformation that can happen before typechecking. It can be used as follows.

~~~~ {.haskell}
-- use the input bound by a Rec1 to update the input bound by a Rec2
f r1@Rec1 { input } r2@Rec2 {} = case input of
    xs | any hasBar xs -> f r1 { input = [] } r2 { input }
~~~~

### Field label inference

It is conceivable that we may want to infer the field labels of a record
themselves, as in:

``` {.haskell}
-- infer that R has the field labels bob and susan
data R = R { .. }
f x@R {bob} = R {susan = bob}
```

In order to implement this, a pass through the file will collect every field
label that is used with an explicit R constructor, and treat the record as if
it had been declared with those names as inferred fields.
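That collection pass can be sketched in ordinary Haskell; the `Use` type and the function names here are hypothetical illustrations, not jhc's actual data structures:

``` {.haskell}
import Data.List (nub)

-- A use of a record constructor somewhere in the source file,
-- together with the field labels mentioned at that use site.
data Use = Use { useCon :: String, useLabels :: [String] }

-- Collect every field label used with the given constructor, in order
-- of first appearance; the record is then treated as if it had been
-- declared with exactly these labels as inferred fields.
inferredFields :: String -> [Use] -> [String]
inferredFields con uses =
  nub (concat [ useLabels u | u <- uses, useCon u == con ])
```

For the `R` example above, scanning `R {bob}` and `R {susan = bob}` would yield the labels bob and susan.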
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] [Haskell-cafe] PROPOSAL: Record field type inference

2014-06-04 Thread John Meacham
Yeah, I am familiar with that; this is fairly orthogonal, actually, and
both can be used together to good effect.

This proposal is more about making records 'lightweight'. For instance, I
have a recursive function that takes five or so recursive values, which
currently all have to be passed as independent parameters (or a tuple),
which is error prone. By making a lightweight inferred record we get
named parameters that are inferred just as if they were listed as
positional parameters, and CPR analysis will even ensure there is no run-time
penalty: the fields will be expanded out as positional parameters
internally.
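The proposed inferred-record syntax cannot be written in standard Haskell yet, but the named-parameter pattern it makes lightweight can be approximated today with an explicitly typed record. The `Walk` type, its fields, and `walk` below are hypothetical illustrations, not part of the proposal:

``` {.haskell}
-- A recursive traversal whose working state travels in a record, giving
-- named parameters instead of several positional arguments. Today the
-- field types must be written out in full; under the proposal the
-- declaration would shrink to:  data Walk = Walk {depth, limit, acc}
data Walk = Walk { depth :: Int, limit :: Int, acc :: [Int] }

walk :: Walk -> [Int]
walk w@Walk{ depth = d, limit = l, acc = a }
  | d >= l    = reverse a                              -- done: return collected values
  | otherwise = walk w { depth = d + 1, acc = d : a }  -- record update acts as named arguments
```

The record update `w { depth = d + 1, acc = d : a }` is what plays the role of passing named parameters; an analysis like CPR can flatten the record back to positional arguments internally.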

An issue with OverloadedRecordFields is that it relies on a lot of
type system complexity. This proposal builds on the existing
DisambiguateRecordFields extension and simply removes the restriction
that records with the same field names be defined in different
modules. Since field names declared this way don't share a namespace
with functions, this isn't an issue.

Notably, everything needed to disambiguate fields can take place in the
renamer, and the fields can be typechecked exactly as if they were top
level pattern bindings.

Another possible extension is allowing it to infer scoped type
variables as well, for a parameterized record type. I am working on
something concrete for that with my jhc implementation; the issue is
that the record will have to be parameterized by a type to keep the
variables from escaping. Perhaps a single type parameter that is
existentially instantiated, with the syntax
data R t = R {fa,fb,fc}
or putting the data declaration in a local where or let binding, but
that will require some more work. (And scoped type variables in jhc
are a little iffy at the moment as is.)

John

On Tue, Jun 3, 2014 at 11:33 PM, Carter Schonwald
carter.schonw...@gmail.com wrote:
 Hey John
 in some respects, this sounds like syntax sugar around
 https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields
 have you had a look at that? (It's not quite merged into HEAD yet, but that's
 due soon I'm told.)


 On Wed, Jun 4, 2014 at 2:26 AM, John Meacham j...@repetae.net wrote:

 This is also available as html at

 http://repetae.net/computer/jhc/record_inference.html

 ___
 Haskell-Cafe

Command line options for forcing 98 and 2010 mode

2014-06-03 Thread John Meacham
Something that would be useful is command line options that will make
ghc conform to the standards in one shot. I have various programs that
conform to the standards, and the only source of incompatibility is the
changing set of options needed to get ghc to compile them. This is
analogous to how hugs has the -98 flag, and to POSIX specifying 'c99'
as a command that invokes your C compiler in a fully conforming ISO C99
mode.

I was thinking of providing ghc98 and ghc2010 command aliases, or
adding an option '--std=haskell98'.

Another option would be to have -XHaskell98 also modify the package
imports in addition to the language features, but this may be undesirable,
as it would then behave differently when specified in a LANGUAGE
pragma.

 John

-- 
John Meacham - http://notanumber.net/
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


[Haskell] ANNOUNCE: jhc 0.8.2

2014-06-02 Thread John Meacham
Lots of internal improvements in this one, so it may be of more interest
to developers.

http://repetae.net/computer/jhc/

- completely replaced the front end parser/lexer with a new more
flexible one. My rewritten one is about 3,250 lines shorter.

- The new one uses a general precedence parser for application as well
as infix operators, allowing consistent handling of bang, irrefutable,
and @ patterns, and allows prefix and infix operators with a higher
precedence than application to be defined. Incidentally, this allows
for ~ and @ in expressions: (~) = negate (prefixr 11) gives us a nice
strongly binding negation, and @ (infixr 12) is perfect for type
directed name resolution without further overloading the humble dot.
They parse exactly the same as they do in patterns, so no new magic is
added to the symbols other than allowing them in expressions.

- Layout algorithm is fully in the lexer now, no lexer/parser
interaction needed and useful transformations happen between the two
simplifying the parser.

- updated a lot of the external libraries to new versions

- improvements to the build based on feedback, now gives much better
error messages about what went wrong.

- include an experimental utility to automatically download 3rd party
packages from hackage and install them, it has some kinks, but even if
it can't do it automatically, it provides a template for easily
modifying the package to work.

- removed the last of the cabal-cruft from the library build system.

- fixed a lot of bugs where errors were not caught early by jhc and so
produced a cryptic message from the compiler later; more errors now
have source locations attached for better reporting.

- ghc 7.8 compatibility contributed by others. (thanks!)

- infix type constructor extension supported

- field punning extension now supported

- new option -fno-sugar to turn off all desugaring that would
introduce any hidden dependencies, all literals are unboxed under it.
WYSIWYG.

- unboxed kinds now more general, work in more places

- record field names may overlap

- more efficient binary representation using LEB128

- Data.String added with mild magic.
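LEB128, mentioned in the binary-representation item above, is the variable-length integer encoding also used by formats such as DWARF: seven payload bits per byte, least significant group first, with the high bit set on every byte except the last. A minimal sketch of the unsigned form (an illustration of the format, not jhc's actual code):

``` {.haskell}
import Data.Bits ((.&.), (.|.), shiftL, shiftR, testBit)
import Data.Word (Word8)

-- Encode a non-negative Integer as unsigned LEB128: emit 7 bits per
-- byte, setting bit 7 on every byte but the last.
encodeLEB :: Integer -> [Word8]
encodeLEB n
  | n < 0x80  = [fromIntegral n]
  | otherwise = (fromIntegral (n .&. 0x7f) .|. 0x80) : encodeLEB (n `shiftR` 7)

-- Decode by accumulating 7-bit groups until a byte with bit 7 clear.
decodeLEB :: [Word8] -> Integer
decodeLEB = go 0 0
  where
    go _     acc []     = acc
    go shift acc (b:bs)
      | testBit b 7 = go (shift + 7) (acc .|. payload) bs
      | otherwise   = acc .|. payload
      where payload = fromIntegral (b .&. 0x7f) `shiftL` shift
```

For example, encodeLEB 300 produces the two bytes 0xAC 0x02, so small values stay small while large ones grow only as needed.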

-- 
John Meacham - http://notanumber.net/


Re: RFC: changes to -i flag for finding source files

2014-05-30 Thread John Meacham
JHC has the feature that

Graphics.UI.GTK.Button can live in any of:

Graphics/UI/GTK/Button.hs
Graphics/UI/GTK.Button.hs
Graphics/UI.GTK.Button.hs
Graphics.UI.GTK.Button.hs

It lets you have deep module hierarchies without deep directory
hierarchies, and is not terribly surprising as behaviors go.
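The lookup above amounts to one candidate path per split point: the first k dots become directory separators and the rest stay in the file name. A sketch (the helper names are mine; jhc's real implementation differs):

``` {.haskell}
import Data.List (intercalate)

-- Split a module name on dots: "A.B.C" -> ["A","B","C"].
splitDots :: String -> [String]
splitDots s = case break (== '.') s of
  (a, [])       -> [a]
  (a, _ : rest) -> a : splitDots rest

-- One candidate file per split point: the first k components become
-- directories, the remaining dotted name becomes the file's base name.
candidatePaths :: String -> [FilePath]
candidatePaths m =
  [ intercalate "/" (take k parts ++ [intercalate "." (drop k parts) ++ ".hs"])
  | k <- [0 .. length parts - 1] ]
  where parts = splitDots m
```

Applied to "Graphics.UI.GTK.Button" this yields exactly the four paths listed above, from fully dotted to fully nested.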

 John

On Fri, Apr 25, 2014 at 6:17 AM, Simon Marlow marlo...@gmail.com wrote:
 I want to propose a simple change to the -i flag for finding source files.
 The problem we often have is that when you're writing code for a library
 that lives deep in the module hierarchy, you end up needing a deep directory
 structure, e.g.

  src/
Graphics/
  UI/
   Gtk/
 Button.hs
 Label.hs
 ...

 where the top few layers are all empty.  There have been proposals of
 elaborate solutions for this in the past (module grafting etc.), but I want
 to propose something really simple that would avoid this problem with
 minimal additional complexity:

   ghc -iGraphics.UI.Gtk=src

 the meaning of this flag is that when searching for modules, ghc will look
 for the module Graphics.UI.Gtk.Button in src/Button.hs, rather than
 src/Graphics/UI/Gtk/Button.hs.  The source file itself is unchanged: it
 still begins with module Graphics.UI.Gtk.Button 

 The implementation is only a few lines in the Finder (and probably rather
 more in the manual and testsuite), but I wanted to get a sense of whether
 people thought this would be a good idea, or if there's a better alternative
 before I push it.

 Pros:

   - simple implementation (but Cabal needs mods, see below)
   - solves the deep directory problem

 Cons:

   - It makes the rules about finding files a bit more complicated.
 People need to find source files too, not just compilers.
   - packages that want to be compatible with older compilers can't
 use it yet.
   - you can't use '=' in a source directory name (but we could pick
 a different syntax if necessary)
   - It won't work for Cabal packages until Cabal is modified to
 support it (PreProcess and SrcDist and perhaps Haddock are the only
 places affected, I think)
   - Hackage will need to reject packages that use this feature without
 also specifying ghc = 7.10 and some cabal-version too.
   - Are there other tools/libraries that will need changes? Leksah?

 Cheers,
 Simon



-- 
John Meacham - http://notanumber.net/


Re: RFC: changes to -i flag for finding source files

2014-05-30 Thread John Meacham
On Fri, May 30, 2014 at 2:45 AM, Daniel Trstenjak
daniel.trsten...@gmail.com wrote:
 Well, it might not be terribly surprising in itself, but we
 just have quite complex systems and the not terribly surprising
 things just accumulate and then it might get surprising somewhere.

 I really prefer simplicity and explicitly.

 If a central tool like GHC adds this behaviour, then all other
 tools are forced to follow.


Well, I just proposed it as an alternative to some of the other ideas
floated here. A command line flag like -i would definitely be ghc
specific and inherently non-portable.

John

-- 
John Meacham - http://notanumber.net/


Re: RFC: changes to -i flag for finding source files

2014-05-30 Thread John Meacham
No; it would be trivial to make it do so, but it would be unusual and
contrary to how ghc does things.

For instance, ghc doesn't warn if both Foo.lhs and Foo.hs exist, or
src/Foo.hs and bar/Foo.hs when both -isrc and -ibar are specified on
the command line.

   John

On Fri, May 30, 2014 at 3:10 AM, Herbert Valerio Riedel h...@gnu.org wrote:
 On 2014-05-30 at 11:00:38 +0200, John Meacham wrote:
 JHC has the feature that

 Graphics.UI.GTK.Button can live in any of:

 Graphics/UI/GTK/Button.hs
 Graphics/UI/GTK.Button.hs
 Graphics/UI.GTK.Button.hs
 Graphics.UI.GTK.Button.hs

 Just wondering: Does JHC warn if, for instance,
 `Graphics/UI/GTK/Button.hs` as well as `Graphics.UI.GTK.Button.hs` exists?



-- 
John Meacham - http://notanumber.net/


Re: AlternateLayoutRule

2014-05-14 Thread John Meacham
Okay, I believe I have come up with a modified version that accepts many more
programs and doesn't require complicated comma handling, you can make all
decisions based on the top of the context stack. It also allows many useful
layouts that were illegal under the old system.

The main change was to store the expected closing token in addition to the
opening one on the context stack, then look up the appropriate token in a
table based on the immediately enclosing context. Unlike the old version
that recursed up the stack, this only needs to look at the top.

The rules are (where _ means true everywhere)

always enabled:
(case,of)
(if,then)
(then,else)
('(',')')
('[',']')
('{','}')
conditional:
of = ('|','->')
let = ('|','=')
'[' = ('|',']')

then we follow the same rules as before, except we check the token against
the stored ending token at the top of the stack and 'let' is only terminated
by ',' if '|' is at the top of the context stack.

In addition I added a list of 'layout insensitive' symbols that will never
poke the layout rule; they are '|', '->', '=', ';', and ','. Lines starting
with these will not continue or close layout contexts.

So, there are still a couple of obscure cases this won't lay out properly, but
it does allow some handy things that were not allowed before.

It boils down to the fact that layout is turned off inside any enclosing
brackets and effects do not propagate outside of them. So nothing done
between [ .. ] can affect anything outside of them.

some handy things this allows

-- a 'cond' construct that lines up nicely.
foo = case () of ()
  | cond1 -> exp1
  | cond2 -> exp2
  | cond3 -> exp3

line up your tuple defs

(alpha
,beta
,gamma) = foo

no need to double indent conditionals in do let

let f x y z
| x == z = y
| x == y = z
| y == z = x


The simple rule is: inside enclosing brackets, layout is turned off and
doesn't propagate. I am testing the rules with ghc and they are promising so
far.
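The closer table above can be sketched as a lookup keyed on the opening token and the immediately enclosing context; the names here are illustrative, not the actual jhc implementation:

``` {.haskell}
-- Each context stack entry stores the opening token together with the
-- closing token that was expected when the context was opened.
type Context = (String, String)

-- Pairs that always open a context, regardless of what encloses them.
alwaysPairs :: [(String, String)]
alwaysPairs =
  [ ("case", "of"), ("if", "then"), ("then", "else")
  , ("(", ")"), ("[", "]"), ("{", "}") ]

-- Pairs enabled only under a particular enclosing context, e.g. '|'
-- expects '->' only inside an 'of'.
conditionalPairs :: String -> [(String, String)]
conditionalPairs "of"  = [("|", "->")]
conditionalPairs "let" = [("|", "=")]
conditionalPairs "["   = [("|", "]")]
conditionalPairs _     = []

-- The expected closer for a token, consulting only the top of the
-- stack, never recursing further up.
expectedCloser :: [Context] -> String -> Maybe String
expectedCloser stack tok = lookup tok (alwaysPairs ++ conditionalPairs top)
  where top = case stack of { (o, _) : _ -> o; [] -> "" }
```

This captures the key change from the earlier version: the decision needs only the innermost context, so there is no walk up the stack.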

John


Re: AlternateLayoutRule

2014-05-14 Thread John Meacham
Actually, it has fewer cases than my previous version; I think I just
wasn't presenting it well. My goal is to make something that will
accept the current programs out there _and_ be much simpler than the
current rule. The parse-error exception brings a huge amount of complexity
into the old rule, and LALR parsers are notoriously tricky for even
experienced programmers.

So, compared to the H98 rule the major simplification is:

Rather than extending layout until the language parser encounters an
error, I proactively disable layout when inserting a semi or brace
would guarantee a parse error under a simple parsing algorithm that
only keeps track of 'nesting level' and nothing else.

So, we basically just keep track of when inserting a semi or close
brace would cause an error of the form '(' with no matching ')' found,
and don't do it then.

Which state the automaton goes to next depends on simple pattern
matching on the current symbol and the current top of the stack, and can
be listed as a set of tuples.

In other words, a textbook deterministic pushdown automaton.
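The "would a virtual token guarantee an error" check needs nothing beyond that stack: a virtual semi or close brace is only safe when the innermost context is a layout context, never inside explicit brackets. A toy illustration of the principle (the type and names are mine, not GHC's or jhc's):

``` {.haskell}
-- The automaton's stack: layout contexts carry an indentation column,
-- explicit contexts carry their opening bracket.
data Ctx = Layout Int | Bracket Char
  deriving (Eq, Show)

-- Inserting a virtual ';' or '}' while inside an explicit bracket would
-- guarantee an error of the form '(' with no matching ')', so refuse.
mayInsertVirtual :: [Ctx] -> Bool
mayInsertVirtual (Layout _ : _) = True
mayInsertVirtual _              = False
```

Because only the top of the stack is inspected, the check is constant time per token.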

John



On Wed, May 14, 2014 at 7:29 AM, Ian Lynagh ig...@earth.li wrote:
 On Tue, May 13, 2014 at 11:20:25PM -0700, John Meacham wrote:
 Okay, I believe I have come up with a modified version that accepts many more
 programs and doesn't require complicated comma handling, you can make all
 decisions based on the top of the context stack. It also allows many useful
 layouts that were illegal under the old system.

 From your description, the rule still sounds quite complex. It depends
 whether the objective is a rule that is close enough to the current rule
 that we can switch with very little code breaking, or a rule that is
 simple to explain to newcomers to the language (but which will require
 much more code tweaking when updating packages to compile with the new
 standard).


 Thanks
 Ian




-- 
John Meacham - http://notanumber.net/


AlternateLayoutRule

2014-05-13 Thread John Meacham
Hi, I noticed that ghc now supports an 'AlternateLayoutRule' but am
having trouble finding information about it. Is it based on my
proposal and sample implementation?
http://www.mail-archive.com/haskell-prime@haskell.org/msg01938.html

https://ghc.haskell.org/trac/haskell-prime/wiki/AlternativeLayoutRule
implies it has been in use since 6.13. If that is the case, I assume
it has been found stable?

I ask because I was going to rewrite the jhc lexer and would like to
use the new mechanism in a way that is compatible with ghc. If it is
already using my code, so much the better.

John

-- 
John Meacham - http://notanumber.net/


Re: AlternateLayoutRule

2014-05-13 Thread John Meacham
On Tue, May 13, 2014 at 2:22 PM, Ian Lynagh ig...@earth.li wrote:
 It's based on your code, but I had to essentially completely re-write it
 to work with the way that GHC's parser works; I don't think sharing the
 code will be feasible.

Ah, yeah, I didn't think the code would translate directly, but I'd want the
logic to match, of course.

But on the subject of sharing code: I just relicensed jhc under a
permissive license like ghc's with the last release, so code can move both
ways. I mainly think it will be useful in the libraries and support
routines rather than compiler core stuff, though I may nab the c--
parser/emitter; I already use c-- as an intermediate language, but with no
external representation.

My lexer/parser is based on a branch of the thih
source, which also eventually begat haskell-src, but it is fairly hairy:
written before monad syntax, so it has hand-written CPS everywhere. I am
hoping that if the new layout rule works out I can move to a packrat or
hybrid recursive descent parser, letting LALR handle the gross structure but
using hand-written recursive descent, with excellent error messages, for
expressions. The only reason I have stuck it out with the happy one for so
long is that it has the magic lexer/layout interaction baked in.

 I also fixed some bugs (e.g. I think that with your code,
     foo = let { x = x }
          in x
 turns into something like
     foo = let { x = x }
          } in x
 ) and made some tweaks after trying it on GHC+bootlibs, but I don't have
 details to hand.

ah cool, can you point me to which file it is implemented in in the source
so I can copy your new rules?

 However, the consensus was that the new rule has too many cases, due to
 trying to match the old rule as closely as possible, and despite that it
 doesn't have the advantage that it is a drop-in replacement. ISTR Cabal
 in particular needed several changes to compile with it
 (0aba7b9f2e5d8acea156d575184a4a63af0a1ed3). Most of them were code of
 the form
     case e of
     p -> e'
     where bs
 needing the 'where' clause to be less indented.

Hmm.. interesting. I only tested against the whole nofib suite originally. I
need to expand that.

 The plan, yet to be implemented, was to remove some of the cases of the
 new rule, making it easier to understand, specify and implement, at the
 expense of breaking more code when the switch is flipped.

I don't mind that if it makes sense. It would only work if ghc did it though,
as no one is going to re-layout their code for jhc (unless they are one of
the ones using it to compile to ARM boards with 32k of RAM :) ).

Maybe I can twiddle the algorithm in ghc itself for a bit first to take
advantage of the bigger accessible codebase to test on.

 Ideally, there would be a period during which compilers would check
 during compilation whether the new rule would give a different token
 sequence to the old rule, and warn if so.

Yeah, it would be tricky for jhc to support both front ends. Though maybe
I can revert my current lexer/parser back to simpler Haskell 98 syntax and
require anything that uses extensions to use the new layout rule.

Thanks,
John

--
John Meacham - http://notanumber.net/


Re: [Haskell] ANNOUNCE: jhc-0.8.1

2014-05-13 Thread John Meacham
I modified it so all packages depended on are listed in only a single
spot in the configuration, to make them easier to change and to give much
better error messages; see the new
http://repetae.net/repos/jhc/configure.ac and the NEEDS_PACKAGE
macros.

Conflicts with possibly-existing instances are pretty easy to work around;
see USE_MONOID_DOC in the config file. I'll make the Show instance for
Identity conditionally defined too, since that one seems to be floating
around. Having to specify specific versions in the script would be
hacky; it is much better to actually identify the reasons for
incompatibility and choose based on those.

John

On Mon, May 12, 2014 at 10:36 PM, Krzysztof Skrzętnicki
gte...@gmail.com wrote:
 Hmm, I'll give it a try, thanks!
 As for the other errors for sure there is missing Binary instance for strict
 ByteStrings in newest binary package.
 Another one is instance for Show (Identity a) which in turn needed to be
 commented out because it appeared in newer version of some other package,
 don't know which one.
 Lastly I get a lot of errors caused by mixing library versions. I *think*
 that the problem is that, unlike Cabal, the Makefile specifies -package foo
 without specific versions. The result is that, given the fact that I have
 more than 1 version of many libraries installed, it tries to build with two
 different versions of same library, or so would it seem: 1 is dependency of
 other library, the other is from -package specification. I think this is the
 case.
 This is why I asked for specific versions of all the libraries involved. The
 idea was to simply specify *all* of them on command line and/or Makefile.

 Right now I don't have time to replicate the errors, but I will try again
 building jhc later.


 Best regards,
 Krzysztof




-- 
John Meacham - http://notanumber.net/


Re: [Haskell] ANNOUNCE: jhc-0.8.1

2014-05-12 Thread John Meacham
Hi, I need to update that page; I compile it with the ubuntu ghc and
the ubuntu packaged libraries. Can you tell me which libraries you are
having issues with? The ./configure should tell you the names of any
that I expected might be an issue; I'll update it to check
and report on everything consistently, even the ones I expect to come with
ghc. If there are compatibility issues between versions, that's a bug I
can fix (usually just a 'hiding' or an explicit import will fix any such
issue).

as for the packages i've been testing with

fgl,regex-compat,bytestring,binary,mtl,containers,unix,utf8-string,zlib,HsSyck,filepath,process,syb,old-time,pretty.
Specific versions should not matter for anything from ghc 7.2 days
to now; if there is a bug where it won't compile with a version
expected to be found in the wild, please feel free to report it.

John

On Mon, May 12, 2014 at 12:10 PM, Krzysztof Skrzętnicki
gte...@gmail.com wrote:
 Hello,

 I tried compiling jhc from source (compiled version doesn't work on my
 system) but after several attempts I just couldn't find a working set of
 libraries for it. Can you specify which versions of libraries are known to
 work for jhc?

 Best regards,
 Krzysztof Skrzętnicki


 On Sun, May 11, 2014 at 10:20 PM, John Meacham j...@repetae.net wrote:

 After a hiatus, jhc 0.8.1 is released.

 http://repetae.net/computer/jhc

  - New license: jhc is now released under a permissive BSD-style license
  rather
    than the GPL. The license is compatible with that of ghc, allowing code
  mixing
    between them.

  - New library layout based around the standards: there are now haskell98
  and
    haskell2010 packages that are guaranteed to be future-proof and strictly
    compatible with the respective standards. A package haskell-extras
  contains
    the additional libraries from ghc's base.

  - Native support for complex and vector SIMD primitives, exposed via type
    functions: for instance, 'foo :: Complex_ Float32_' gives hardware
  accelerated
    complex 32-bit floats. These are unboxed only for now; full
    library Num support is in the works.

 - support for android as a target, you must install the android NDK to use
 this.

 - Support for embedded ARM architectures imported from Kiwamu Okabe's
 branch
   allowing targeting bare hardware with no OS.

 - user defined kinds, introduced with the 'kind' keyword otherwise looking
 like
   'type' declarations.

 - export/import lists now allow namespace qualifiers kind, class, type, or
 data
   to explicitly only import or export the specific named entity. As an
   extension allowed by this, classes and types no longer are in the same
   namespace and can share names.

 - ForeignPtr's now have working finalizers when collected by the RTS.

 - CTYPE pragma to allow promoting arbitrary C types to FFIable entities.





-- 
John Meacham - http://notanumber.net/


Re: [Haskell] ANNOUNCE: jhc-0.8.1

2014-05-12 Thread John Meacham
Yeah, there was a bug in the way it detected editline/readline which
has been fixed in the repo.

You can run configure with --disable-line to work around it. or change
the word USE_NOLINE to USE_READLINE in src/Util/Interact.hs

always some silly typo that works its way in somewhere. I should stop
the version number shift and declare it 1.0.0 and use the third digit
for actual point releases rather than keep the perpetual 0.x.y wasting
the first digit. but then I can't hide behind the 'beta' shield
anymore. :)

John

On Mon, May 12, 2014 at 7:56 PM, Jens Petersen
j...@community.haskell.org wrote:
 Thank you for the new release. :)

 On 13 May 2014 04:40, John Meacham j...@repetae.net wrote:

 as for the packages i've been testing with


 fgl,regex-compat,bytestring,binary,mtl,containers,unix,utf8-string,zlib,HsSyck,filepath,process,syb,old-time,pretty.


 and editline ?

 For me it fails to build on Fedora 20 with the readline package but
 completes with editline.

 Krzysztof: maybe try removing or hiding the readline package or posting your
 build error. :)

 Jens




-- 
John Meacham - http://notanumber.net/


Re: Does GHC 7.8 make targeting bare metal ARM any easier?

2013-03-20 Thread John Meacham
kiwamu has been targeting an arm cortex-m3 successfully with jhc. This
is a CPU with 40k of RAM, running Haskell code very much on bare metal.
:)

John

On Tue, Mar 19, 2013 at 6:07 PM, Jeremy Shaw jer...@n-heptane.com wrote:
 There have been at least a couple projects, such as hOp and HaLVM
 which attempt to run GHC on the bare metal or something similar.

 Both these projects required a substantial set of patches against GHC
 to remove dependencies on things like POSIX/libc. Due to their highly
 invasive nature, they are also highly prone to bitrot.

 With GHC 7.8, I believe we will be able to cross-compile to the
 Raspberry Pi platform. But, what really appeals to me is going that
 extra step and avoiding the OS entirely and running on the bare metal.
 Obviously, you give up a lot -- such as drivers, network stacks, etc.
 But, there is also a lot of potential to do neat things, and not have
 to worry about properly shutting down an embedded linux box.

 Also, since the raspberry pi is a very limited, uniform platform,
 (compared to general purpose PCs) it is feasible to create network
 drivers, etc, because only one chipset needs to be supported.
 (Ignoring issues regarding binary blobs, undocumented chipsets, usb
 WIFI, etc).

 I'm wondering if things are any easier with cross-compilation support
 improving. My thought is that less of GHC needs to be tweaked?

 - jeremy






Re: [Haskell-cafe] [jhc] ANNOUNCE: Ajhc 0.8.0.2 Release

2013-03-16 Thread John Meacham
I have merged your changes back into the main jhc tree. Thanks!
John

On Sat, Mar 16, 2013 at 5:28 AM, Kiwamu Okabe kiw...@debian.or.jp wrote:
 We are happy to announce Ajhc 0.8.0.2.

 It's the first release announcement for Ajhc.
 The major change in this release is the ability to compile Haskell code for tiny CPUs.
 There is a demo on a tiny CPU at https://github.com/ajhc/demo-cortex-m3.
 And you can watch a demo movie at http://www.youtube.com/watch?v=bKp-FC0aeFE.
 Perhaps the changes in this announcement will be merged into jhc.

 Ajhc's project web site is found at http://ajhc.masterq.net/.
 You can get Ajhc 0.8.0.2 source code from https://github.com/ajhc/ajhc/tags.

 ## Changes

 * Fix warning messages on compiling.
 * Ready to compile with GHC 7.6.2.
 * New RTS for tiny CPU. How to use:
   https://github.com/ajhc/demo-cortex-m3#porting-the-demo-to-a-new-platform

 - - -
 Metasepi team

 ___
 jhc mailing list
 j...@haskell.org
 http://www.haskell.org/mailman/listinfo/jhc

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: Start Ajhc project with forking jhc.

2013-03-06 Thread John Meacham
What is the cortex m3 board you are experimenting with? looks like it
could be a Maple Mini https://www.sparkfun.com/products/11280 ?

If so, getting it in 20k of RAM is quite impressive :) I only tested
against larger ARM processors such as tablets/cell phones.

John

On Wed, Mar 6, 2013 at 4:51 AM, Kiwamu Okabe kiw...@debian.or.jp wrote:
 Hi all.

 I am a user of the jhc Haskell compiler.
 Jhc can compile Haskell code for tiny architectures such as the Cortex-M3.
 I have written an LED-blinking demo for the Cortex-M3 with jhc.
 Very fun!

   https://github.com/ajhc/demo-cortex-m3
   http://www.youtube.com/watch?v=3R9sogReVHg

 And I have created many patches for jhc.
 But... I think that the upstream author of jhc, John Meacham,
 can't pull the contributions quickly, because he is too busy.
 For me, it's difficult to maintain many patches without any
 repository.

 Then, I have decided to fork jhc, named Ajhc.
 # painful...

   http://ajhc.github.com/

 I will feedback Ajhc's big changes to jhc mailing list.
 Or I am so happy if John joins Ajhc project.

 Regards,
 --
 Kiwamu Okabe

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: data kinds

2013-01-26 Thread John Meacham
On Fri, Jan 25, 2013 at 7:19 AM, Ross Paterson r...@soi.city.ac.uk wrote:

 GHC implements data kinds by promoting data declarations of a certain
 restricted form, but I wonder if it would be better to have a special
 syntax for kind definitions, say

   data kind Nat = Zero | Succ Nat


This is exactly the syntax jhc uses for user defined kinds.
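For comparison, GHC's DataKinds extension gets a similar effect by promoting an ordinary data declaration; a small sketch (my example, not from the thread) using the promoted Nat to index a length-typed vector:

```haskell
{-# LANGUAGE DataKinds, KindSignatures, GADTs #-}

-- promoted to the kind level by DataKinds, much as 'data kind' would declare it
data Nat = Zero | Succ Nat

data Vec (n :: Nat) a where
  Nil  :: Vec 'Zero a
  Cons :: a -> Vec n a -> Vec ('Succ n) a

-- total: the type rules out applying it to Nil
vhead :: Vec ('Succ n) a -> a
vhead (Cons x _) = x

main :: IO ()
main = print (vhead (Cons (1 :: Int) Nil))
```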

John
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Record syntax, reopening a can of worms.

2012-05-26 Thread John Meacham
Is it any more ridiculous than

 f x@Nothing {} = fromJust x
 main = print (f Nothing)

crashing at run time? That is what you are expressing with your first
one. This issue is completely unrelated to the named field syntax,
they behave exactly like data types with non-named fields.

However, you can achieve something like what you want with phantom types.

 data ALike
 data BLike

 data MyData t = A { a :: Int,
                     b :: Int }
               | B { c :: Int }

 mkA x y = A x y :: MyData ALike
 mkB x = B x :: MyData BLike

then you can write functions of
'MyData ALike' to indicate it will only have 'A' as a constructor
'MyData BLike' to indicate it will only have 'B'
and 'forall t . MyData t' for functions that can take a general MyData
that can have either.
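A compilable version of the sketch above, with the field types as given in the message; note the guarantee only holds as long as values are built through the smart constructors mkA/mkB:

```haskell
{-# LANGUAGE EmptyDataDecls #-}

-- phantom tags: never inhabited, used only at the type level
data ALike
data BLike

data MyData t = A { a :: Int, b :: Int }
              | B { c :: Int }

mkA :: Int -> Int -> MyData ALike
mkA x y = A x y

mkB :: Int -> MyData BLike
mkB = B

-- safe as long as values come from mkA: only the 'A' constructor can appear
getA :: MyData ALike -> Int
getA = a

main :: IO ()
main = print (getA (mkA 1 2))   -- getA (mkB 3) would be a type error
```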

John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Types of when and unless in Control.Monad

2012-04-21 Thread John Meacham
Yes, this has always bothered me too.

   John

On Sat, Apr 21, 2012 at 2:51 AM, Andreas Abel andreas.a...@ifi.lmu.de wrote:
 In Control.Monad, when has type

 when :: Monad m => Bool -> m () -> m ()

 I think this type should be generalized to

 when :: Monad m => Bool -> m a -> m ()

 to avoid silly return () statements like in

  when cond $ do
    monadicComputationWhoseResultIWantToDiscard
    return ()
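Today the standard workaround is `void` from Control.Monad; a sketch of the generalized combinator (the name `when_` is mine, not from any library):

```haskell
import Control.Monad (when, void)

-- a 'when' that discards the action's result, so no trailing 'return ()' is needed
when_ :: Monad m => Bool -> m a -> m ()
when_ cond act = when cond (void act)

main :: IO ()
main = do
  when_ True  (return (42 :: Int))   -- result discarded
  when_ False (return (0 :: Int))    -- action skipped entirely
  putStrLn "done"
```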

 Cheers,
 Andreas

 P.S.:  A more systematic solution would be to change the Haskell language by
 either introducing a Top type which is the supertype of everything and use
 it instead of ().

 Alternatively, make () the top type and interpret matching against the empty
 tuple as just computing the weak head normal form and then discarding the
 result.  The latter is needed to preserve the current behavior of

  (\ () -> bla) (error "raised")

 vs.

  (\ _ -> bla) (error "not raised")


 --
 Andreas Abel        Du bist der geliebte Mensch.

 Theoretical Computer Science, University of Munich
 Oettingenstr. 67, D-80538 Munich, GERMANY

 andreas.a...@ifi.lmu.de
 http://www2.tcs.ifi.lmu.de/~abel/

 ___
 Haskell mailing list
 Haskell@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] New in haskell for old-timers?

2012-03-30 Thread John Meacham
On Fri, Mar 30, 2012 at 1:05 PM, Mats Rauhala mats.rauh...@gmail.com wrote:
 Oh wow, I thought jhc was discontinued, but just checked the
 repositories and mailing lists and it's alive and well. No idea where I
 got the idea that it was discontinued. Going a little bit on tangent
 here, but if I understood correctly, jhc is meant to do more
 optimization. How does this compare to for example ghc?

I occasionally take some time off for other projects, but jhc is alive and well.
 The C produced is quite legible and even better, portable. Everything
from the Nintendo DS to the iPhone has been a target.

I am always welcoming active developers. No better way to kick me into
jhc mode than submitting some patches. :)

0.8.1 is almost due to be put out, it will be the first to be 100%
haskell 2010 (and haskell 98) compliant and has a lot of other neat
features over 0.8.0.

   John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Stealing ideas from the latest GCC release

2012-03-22 Thread John Meacham
I support a form of this in jhc by allowing specialization of values,
not just types.
It is actually the same mechanism as type specialization since that is
just value specialization where the value being specialized on is the
type parameter.

foo :: Bool -> Int

{-# SPECIALIZE foo True :: Int #-}
(I think this is the current syntax, I changed it a couple times in
jhc's history)

will create a

foo_True :: Int
foo_True = inline foo True

and a

{-# RULE foo True = foo_True #-}
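In GHC terms the generated pair can be written out by hand; the definitions below are illustrative (the specialized body is inlined manually, which also keeps the rewrite rule from looping back into `foo_True`'s own right-hand side):

```haskell
foo :: Bool -> Int
foo True  = 1
foo False = 0
{-# NOINLINE foo #-}   -- keep calls to 'foo' visible so the rule can fire

-- the specialized entry point, with foo's body at True written out by hand
foo_True :: Int
foo_True = 1

{-# RULES "foo/True" foo True = foo_True #-}

main :: IO ()
main = print (foo True + foo_True)
```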

   John

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Is there a better way to subtyping?

2012-03-13 Thread John Meacham
Why not

data Super
    = SuperA {
        commonFields :: (),
        aFields :: ()
      }
    | SuperB {
        commonFields :: (),
        bFields :: ()
      }
    | SuperC {
        commonFields :: (),
        cFields :: ()
      }

reusing the common field names between constructors like this is a-okay.
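With concrete field types filled in (my choice, for a runnable sketch), the shared selector is total across all constructors while the per-constructor selectors stay partial:

```haskell
data Super
    = SuperA { commonFields :: Int, aFields :: String }
    | SuperB { commonFields :: Int, bFields :: Bool }
    | SuperC { commonFields :: Int, cFields :: Char }

main :: IO ()
main = do
  print (commonFields (SuperA 1 "x"))   -- works on any constructor
  print (commonFields (SuperB 2 True))
  -- aFields (SuperB 2 True) would compile but crash at runtime
```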

   John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Getting the file descriptor of a handle, without closing it

2012-03-10 Thread John Meacham
Can you use 'dup' to copy the file descriptor and return that version?
That will keep a reference to the file even if haskell closes the
original descriptor.
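With the unix package this looks roughly like the following (a sketch of the suggested approach, using stdout's descriptor for illustration):

```haskell
import System.Posix.IO (dup, stdOutput)

main :: IO ()
main = do
  -- duplicate fd 1; the copy stays valid even if the original
  -- descriptor is later closed when its Handle is finalized
  fd <- dup stdOutput
  print (fd /= stdOutput)   -- a fresh, independent descriptor
```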

John


On Sat, Mar 10, 2012 at 5:31 PM, Volker Wysk p...@volker-wysk.de wrote:
 Hi

 This is an addition to my previous post.


 This modified version of main seems to work:

 main = do

   fd <- unsafeWithHandleFd stdin return
   putStrLn ("stdin: fd = " ++ show fd)

   fd <- unsafeWithHandleFd stdout return
   putStrLn ("stdout: fd = " ++ show fd)


 The way I understand it, unsafeWithHandleFd's job is to keep a reference to
 the hande, so it won't be garbage collected, while the action is still
 running. Garbage collecting the handle would close it, as well as the
 underlying file descriptor, while the latter is still in use by the action.
 This can't happen as long as use of the file descriptor is encapsulated in the
 action.

 This encapsulation can be circumvented by returning the file descriptor, and
 that's what the modified main function above does. This should usually never
 be done.

 However, I want to use it with stdin, stdout and stderr, only. These three
 should never be garbage collected, should they? I think it would be safe to
 use unsafeWithHandleFd this way. Am I right?


 unsafeWithHandleFd is still broken (see previous message), but for my purposes
 it wouldn't necessarily need to be fixed.


 Happy hacking
 Volker Wysk

 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: How unsafe are unsafeThawArray# and unsafeFreezeArray#

2012-03-09 Thread John Meacham
Out of curiosity, Is the reason you keep track of mutable vs not
mutable heap allocations in order to optimize the generational garbage
collector? as in, if a non-mutable value is placed in an older
generation you don't need to worry about it being updated with a link
to a newer one or is there another reason you keep track of it? Is it
a pure optimization or needed for correctness?

A weakness of jhc right now is its stop the world garbage collector,
so far, I have been mitigating it by not creating garbage whenever
possible, I do an escape analysis and allocate values on the stack
when possible, and recognize linear uses of heap value in some
circumstances and re-use heap locations directly (like when a cons
cell is deconstructed and another is constructed right away I can
reuse the spacue in certain cases) but eventually a full GC needs to
run and it blocks the whole runtime which is particularly not good for
embedded targets (where jhc seems to be thriving at the moment.). My
unsafeFreeze and unsafeThaw are currently NOPs. frozen arrays are just
a newtype of non-frozen ones (see
http://repetae.net/dw/darcsweb.cgi?r=jhc;a=headblob;f=/lib/jhc-prim/Jhc/Prim/Array.hs)

John

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Global Arrays

2012-03-09 Thread John Meacham
On Fri, Mar 9, 2012 at 12:48 PM, Clark Gaebel
cgae...@csclub.uwaterloo.ca wrote:
 static const double globalArray[] = { huge list of doubles };
 double* getGlobalArray() { return globalArray; }
 int getGlobalArraySize() { return sizeof(globalArray)/sizeof(globalArray[0]); }

 And importing it in Haskell with the FFI, followed with an unsafeCast:

 foreign import ccall unsafe "getGlobalArray" c_globalArray :: Ptr CDouble
 foreign import ccall unsafe "getGlobalArraySize" c_globalArraySize :: CInt

You can use Data.Array.Storable to do this.
http://hackage.haskell.org/packages/archive/array/0.3.0.3/doc/html/Data-Array-Storable.html

Also, there is no need to create stub C functions, you can foreign import
the array directly
And if you don't want to cast between CDouble and Double you can declare
your array to be of HsDouble and #include HsFFI.h

const HsDouble globalArray[] = { huge list of doubles };
foreign import ccall unsafe globalArray :: Ptr Double
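Reading through such a pointer then uses the ordinary Storable machinery. Since the C object file can't be linked in a self-contained sketch, the imported `Ptr Double` is simulated below with a locally marshalled array:

```haskell
import Foreign.Marshal.Array (withArray, peekArray)
import Foreign.Ptr (Ptr)

-- what you would do with the imported pointer and its element count
readGlobal :: Ptr Double -> Int -> IO [Double]
readGlobal p n = peekArray n p

main :: IO ()
main = do
  xs <- withArray [1.0, 2.0, 3.0 :: Double] (\p -> readGlobal p 3)
  print xs
```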

John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Global Arrays

2012-03-09 Thread John Meacham
On Fri, Mar 9, 2012 at 5:49 PM, Clark Gaebel
cgae...@csclub.uwaterloo.ca wrote:
 What's the advantage of using D.A.Storable over D.Vector? And yes,
 good call with creating an array of HSDouble directly. I didn't think
 of that!

Oh, looks like D.Vector has an unsafeFromForeignPtr too, I didn't see
that. So D.Vector should work just fine. :)

John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Interpreting the strictness annotations output by ghc --show-iface

2012-03-07 Thread John Meacham
L = lazy
S = strict
A = absent

f :: Int -> (Char,Char) -> Int -> Char

LS(S,L)A

means that it is lazy in the first int, strict in the tuple, strict in
the first argument of the tuple but lazy in the second and the third
argument is not used at all. I have a paper that describes it
somewhere. I modeled the jhc strictness analyzer after the ghc one
(with minor hindsight improvements) so pored over the ghc one for
quite a while once upon a time.
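The described demand signature is observable directly: every position marked lazy or absent can be ⊥ without the program crashing (my demo, using the example type above):

```haskell
-- LS(S,L)A: lazy in arg 1, strict in the pair and its first component,
-- lazy in the pair's second component, arg 3 absent
f :: Int -> (Char, Char) -> Int -> Char
f _ (x, _) _ = x

main :: IO ()
main = putStrLn [f undefined ('x', undefined) undefined]
```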

John

On Wed, Mar 7, 2012 at 3:21 PM, Johan Tibell johan.tib...@gmail.com wrote:
 Hi,

 If someone could clearly specify the exact interpretation of these
 LLSL(ULL) strictness/demand annotations shown by ghc --show-iface I'd
 like to try to write a little tool that highlights the function
 argument binding in an IDE (e.g. Emacs) with this information. Anyone
 care to explain the syntax?

 Cheers,
 Johan

 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Interpreting the strictness annotations output by ghc --show-iface

2012-03-07 Thread John Meacham
On Wed, Mar 7, 2012 at 3:26 PM, John Meacham j...@repetae.net wrote:
 L = lazy
 S = strict
 A = absent

 f :: Int -> (Char,Char) -> Int -> Char

 LS(S,L)A

 means that it is lazy in the first int, strict in the tuple, strict in
 the first argument of the tuple but lazy in the second and the third
 argument is not used at all. I have a paper that describes it
 somewhere. I modeled the jhc strictness analyzer after the ghc one
 (with minor hindsight improvements) so pored over the ghc one for
 quite a while once upon a time.

Oh, and the (..) works for all CPR types, not just tuples.

   John

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Interpreting the strictness annotations output by ghc --show-iface

2012-03-07 Thread John Meacham
Ah, looks like it got a bit more complicated since I looked at it
last... time to update jhc :)

Actually, not sure if the Eval/Box split is relevant to my core. Hmm.

   John

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Interpreting the strictness annotations output by ghc --show-iface

2012-03-07 Thread John Meacham
On Wed, Mar 7, 2012 at 5:01 PM, Johan Tibell johan.tib...@gmail.com wrote:
 On Wed, Mar 7, 2012 at 4:38 PM, Brandon Allbery allber...@gmail.com wrote:
 I think the original type signature is needed to figure it out.  In the
 earlier example it indicated ghc drilling down into the type (a tuple) and
 determining the strictness of the constituents.

 If parentheses were only for tuples I would never expect to see e.g. U(L).

They are for all CPR types, not just tuples. so that could be

data Foo = Foo Int

It may also omit values for which it has no information; I can't
recall if ghc does that or not.

   John

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] If you'd design a Haskell-like language, what would you do different?

2012-03-07 Thread John Meacham
On Mon, Dec 19, 2011 at 7:10 PM, Alexander Solla alex.so...@gmail.com wrote:
 * Documentation that discourages thinking about bottom as a 'value'.  It's
 not a value, and that is what defines it.

The fact that bottom is a value in Haskell is the fundamental thing that
differentiates Haskell from other languages and the source of its power. The
fact that f _|_ /= _|_ potentially _is_ what it means to be a lazy language.
It is what allows us to implement loops and conditionals as normal functions
rather than requiring special and limited language contexts. But more
importantly, it is required to think about _|_ as a value in order to prove
the correctness of many algorithms, it is the base case of your inductive
reasoning. A Haskell programmer who cannot think of _|_ in that way will
plateau, as very useful idioms such as tying the knot, infinite data
structures, etc. are just complicated to think about in operational
semantics when they get more interesting than an infinite list. Not treating
_|_ as a value would be a huge disservice to anyone learning the language.
Sure, it may seem a little strange coming from the imperative world to think
of it as a value, but it is by far the oddest concept in Haskell, after all,
_functions_ are values in Haskell and people seem to eventually figure that
out.
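Two one-liners make the point concrete (my examples): a function can map ⊥ to a defined result, and that laziness is exactly what lets infinite structures be defined at all:

```haskell
import Data.Function (fix)

-- an infinite list, definable because (:) is lazy in its tail
ones :: [Int]
ones = fix (1 :)

main :: IO ()
main = do
  print (const True (undefined :: Int))   -- f ⊥ /= ⊥
  print (take 3 ones)
```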

   John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: FW: 7.4.1-pre: Show Integral

2012-03-06 Thread John Meacham
On Sat, Dec 24, 2011 at 12:37 PM, Ian Lynagh ig...@earth.li wrote:
 On Fri, Dec 23, 2011 at 05:41:23PM +, Simon Peyton-Jones wrote:
 I'm confused too.  I'd welcome clarification from the Haskell Prime folk.

 We use the library process to agree changes to the libraries, and
 Haskell' should then incorporate the changes into the next version of
 the standard.

FWIW, the library change process is nowhere near rigorous enough to
decide what should go into a language standard. Not that some good
 ideas have not been explored, but before adding them to a language
standard, they would require considerably more discussion.

   John

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: simple extension to ghc's record disambiguation rules

2012-02-19 Thread John Meacham
On Sun, Feb 19, 2012 at 4:21 PM, AntC anthony_clay...@clear.net.nz wrote:
Hi, I'd like to propose an extremely simple extension to ghc's record
disambiguation rules,
 I wonder if John is teasing us? Nothing wrt to records is simple (IMHO).

That is rather defeatist. Degree of simplicity is actually something that very
quickly becomes very relevant and apparent as soon as one starts implementing
these proposals. Luckily, experience gives us some ability to figure out what
will be simple and what won't be.

 John seems to be unaware of the threads on 'Records in Haskell' (ghc-users)
 or 'Type-Directed Name Resolution' (cafe) that have been storming for the
 last few months.

Trust me, I am completely aware of them and have been following these ideas in
all their incarnations over the years. Seeing as how I will likely be one of
the people that actually implements said ideas, I do keep on top of such
things.

 He refers to a web page ...
 ideally it would be combined with the 'update' and 'label-based
 pattern-matching' extensions from this page
 http://hackage.haskell.org/trac/haskell-prime/wiki/ExistingRecords

 ... which is several years old, requesting features that have been largely
 delivered in
 ghc's -XDisambiguateRecordFields, -XNamedFieldPuns, and -XRecordWildCards.

Neither of those extensions I specifically mention are included in those you
list. Nor are in any way delivered by them.  Though, those extensions you do
list are quite handy indeed and I am glad to see ghc support them and in fact,
what I am proposing is dependent on them.

 my motivation is that I often have record types with multiple constructors
 but common fields.

 Perhaps this is a different requirement to those threads above?
 Most are dealing with the namespacing problem of common fields in different
 record types.
 I think John means common fields under different constructors within the
 same type(?).

No, I am trying to deal with namespace problems of common fields in different
labelled field types. I refer to different constructors within the same type
because that is where the common mechanisms for dealing with said ambiguities
(such as the DisambiguateRecordFields extension) break down when presented with
such types.

The basic issue is that in order for DisambiguateRecordFields to work it needs
an unambiguous indicator of what type the record update is called at, currently
it can use the explicit constructor name as said indicator. my proposal is
simply to also allow an explicit type to have the same effect.

 Extremely simple? I don't think so.

Simple can be quantified here pretty easily

Ways in which it is simple:
* requires no changes to existing desugarings
* conservative in that it doesn't change the meaning of existing programs
* well tested in that it is an application of an existing extension
  (DisambiguateRecordFields) to a slighly broader context.

Ways in which it is complicated:
* record field names now depend on context. (already true with the
  DisambiguateRecordFields extension.)
* requires information transfer from the type checker to the renamer. (or some
  renaming to be delayed until the type checker). This is a common
feature of some
  proposed extensions out there, however this extension explicitly cuts the
  recursive knot requiring an explicit type signature, so while the renaming
  is affected by typing, the typing will not be affected by said renaming. A
  particularly thorny problem to resolve with some other proposals.


Your DORF proposal is interesting too, but is not really directly related to
this relatively local change to an existing extension. There is no reason to
conflate them.

John

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Removal of #include HsFFI.h from template-hsc.h breaks largefile support on 32bit Linux

2012-02-17 Thread John Meacham
It isn't local to a file though because it changes the ABI, for instance

void foo(off_t *x);

it will blow up if called from a file with a differently sized off_t.

John

On Fri, Feb 17, 2012 at 4:23 AM, Simon Marlow marlo...@gmail.com wrote:
 On 16/02/2012 13:25, Eugene Crosser wrote:

 Hello Simon, thanks for your attention :)

 On 02/16/2012 04:25 PM, Simon Marlow wrote:

 I found that earlier versions of hsc2hs included HsFFI.h into the

 [...]

 As I understand, this situation means that while the ghc itself and
 haskell programs compiled by it are largefile-capable, any third party
 modules that contain .hsc files are not. If I am right, this is probably
 not a good thing.


 We discovered this during the 7.4 cycle:

   http://hackage.haskell.org/trac/ghc/ticket/2897#comment:12

 Packages that were relying on `HsFFI.h` to define `_FILE_OFFSET_BITS`
 should no longer do this, instead they should use an appropriate
 autoconf script or some other method.  See the `unix` package for an
 example of how to do this.  It was really a mistake that it worked
 before.


 But that means that the C build environment has to be constructed
 independently for each module (that needs it), and consequently is not
 guaranteed to match the compiler's environment. Would it be better (more
 consistent) to propagate GHC's (or other compiler's) environment by
 default, along the lines of the comment #16? To cite Duncan, each
 Haskell implementation has its own C environment, and hsc2hs must use
 that same environment or it will produce incorrect results.

 Just a thought, and, as I said, I am not really qualified to argue...


 Well, the question of whether to use 64-bit file offsets or not really has
 nothing to do with GHC itself.  The choice is made in the base package and
 is only visible via the definition of Foreign.C.Types.COff and through the
 unix package.  In fact, there's nothing stopping your own package from using
 32-bit file offsets if you want to.

 The time you would want to be compatible is if you want to make your own FFI
 declarations that use Foreign.C.Types.COff.  In that case you need to know
 that the base package is using _FILE_OFFSET_BITS=64 and do the same thing.

 Cheers,
        Simon



 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Removal of #include HsFFI.h from template-hsc.h breaks largefile support on 32bit Linux

2012-02-17 Thread John Meacham
On Fri, Feb 17, 2012 at 2:12 PM, Simon Marlow marlo...@gmail.com wrote:
 On 17/02/12 19:36, John Meacham wrote:

 It isn't local to a file though because it changes the ABI, for instance

 void foo(off_t *x);

 it will blow up if called from a file with a differently sized off_t.


 But we're talking about Haskell code here, not C code.  There's no way for
 something to blow up, the typechecker will catch any discrepancies.

 Perhaps I don't understand what problem you're thinking of - can you give
 more detail?

Someone writes a C function that returns an off_t * that is foreign
imported by a haskell
program using Ptr COff, the haskell program then writes to the output
pointer with the COff
Storable instance. However the imported function was compiled  without
64 bit off_t's so it
only allocated 32 bits to be written into so some other memory gets
overwritten with garbage.
64 bit off_t's change the ABI of called C functions
much like passing -m32 or -mrtd does, so it should be considered an
all-or-nothing sort of thing. In
particular, when ghc compiles C code, it should make sure it does it
with the same ABI flags
as the rest of the thing being compiled.

 John

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


simple extension to ghc's record disambiguation rules

2012-02-17 Thread John Meacham
Hi, I'd like to propose an extremely simple extension to ghc's record
disambiguation rules,

my motivation is that I often have record types with multiple constructors
but common fields.

so the handy idiom of

f Rec { .. } = do
    blah
    return Rec { .. }

won't work, because I don't know the specific constructor.

so, my proposal is that when you come across something like

(e::RecType) { blah = foo }

(with an explicit type signature like shown)

You interpret 'blah' as if it is a field of the record of type 'Rec'. This
gives the advantages of record field names being scoped by type but without
you having to specify the precise constructor.

It is also backwards compatible for expressions, but would be a new thing
for patterns which generally don't allow type signatures there.

It sidesteps type checker interactions by only being triggered when an
explicit type annotation is included.

ideally it would be combined with the 'update' and 'label-based
pattern-matching' extensions from this page
http://hackage.haskell.org/trac/haskell-prime/wiki/ExistingRecords
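A sketch of the proposed rule; the annotated update below is the *proposed* syntax, not valid in any released compiler:

```haskell
data Rec = R1 { shared :: Int, x :: Int }
         | R2 { shared :: Int, y :: Bool }

-- today DisambiguateRecordFields needs an explicit constructor to resolve
-- 'shared'; under the proposal the annotation names the type, which is enough:
bump :: Rec -> Rec
bump e = (e :: Rec) { shared = shared e + 1 }   -- proposed syntax
```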

John

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


simple extension to ghc's record disambiguation rules

2012-02-17 Thread John Meacham
Hi, I'd like to propose an extremely simple extension to ghc's record
disambiguation rules,

my motivation is that I often have record types with multiple constructors
but common fields.

so the handy idiom of

f Rec { .. } = do
    blah
    return Rec { .. }

won't work, because I don't know the specific constructor.

so, my proposal is that when you come across something like

(e::RecType) { blah = foo }

(with an explicit type signature like shown)

You interpret 'blah' as if it is a field of the record of type 'Rec'. This
gives the advantages of record field names being scoped by type but without
you having to specify the precise constructor.

It is also backwards compatible for expressions, but would be a new thing
for patterns which generally don't allow type signatures there.

It sidesteps type checker interactions by only being triggered when an
explicit type annotation is included.

ideally it would be combined with the 'update' and 'label-based
pattern-matching' extensions from this page
http://hackage.haskell.org/trac/haskell-prime/wiki/ExistingRecords

John

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Removal of #include HsFFI.h from template-hsc.h breaks largefile support on 32bit Linux

2012-02-16 Thread John Meacham
I have similar issues to this in jhc due to its pervasive caching of
compilation results. Basically I must keep track of any potentially
ABI-changing flags and ensure they are consistently passed to every
compilation unit and include them in the signature hash along with the
file contents. I make sure to always pass said flags on the command
line to all the tools, as in -D_FILE_OFFSET_BITS=64 gets passed to
both gcc and hsc2hs rather than relying on a config file.

John

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Unpack primitive types by default in data

2012-02-16 Thread John Meacham
FWIW

jhc has always unboxed everything smaller than or equal to the size of a pointer
unconditionally. It's all about the cache performance.

John

On Thu, Feb 16, 2012 at 4:25 PM, Johan Tibell johan.tib...@gmail.com wrote:
 Hi all,

 I've been thinking about this some more and I think we should
 definitely unpack primitive types (e.g. Int, Word, Float, Double,
 Char) by default.

 The worry is that reboxing will cost us, but I realized today that at
 least one other language, Java, does this already today and even
 though it hurts performance in some cases, it seems to be a win on
 average. In Java all primitive fields get auto-boxed/unboxed when
 stored in polymorphic fields (e.g. in a HashMap which stores keys and
 fields as Object pointers.) This seems analogous to our case, except
 we might also unbox when calling lazy functions.

 Here's an idea of how to test this hypothesis:

  1. Get a bunch of benchmarks.
  2. Change GHC to make UNPACK a no-op for primitive types (as library
 authors have already worked around the lack of unpacking by using this
 pragma.)
  3. Run the benchmarks.
  4. Change GHC to always unpack primitive types (regardless of the
 presence of an UNPACK pragma.)
  5. Run the benchmarks.
  6. Compare the results.

 Number (1) might be what's keeping us back right now, as we feel that
 we don't have a good benchmark set. I suggest we try with nofib first
 and see if there's a difference and then move on to e.g. the shootout
 benchmarks.

 I imagine that ignoring UNPACK pragmas selectively wouldn't be too
 hard. Where is the relevant code?

 Cheers,
 Johan




Re: Unpack primitive types by default in data

2012-02-16 Thread John Meacham
On Thu, Feb 16, 2012 at 4:42 PM, Johan Tibell johan.tib...@gmail.com wrote:
 1. In theory the user could create a cut-n-paste copy of the data
 structure and specialize it to a particular type, but I think we all
 agree that would be unfortunate (not to say that it cannot be
 justified in extreme cases.)

I thought data families could help here: you could put a SPECIALIZE pragma on
a data type and the compiler would turn it into a data family behind the
scenes, specializing everything to said type. Though that wouldn't do
things like turn a Data.Map into a Data.IntMap...

John



Re: [Haskell] [Haskell-cafe] ANNOUNCE: system-filepath 0.4.5 and system-fileio 0.3.4

2012-02-16 Thread John Meacham
On Thu, Feb 16, 2012 at 1:20 PM, Ian Lynagh ig...@earth.li wrote:
 I've now implemented this in GHC. For now, the syntax is:

 type    {-# CTYPE "some C type" #-} Foo = ...
 newtype {-# CTYPE "some C type" #-} Foo = ...
 data    {-# CTYPE "some C type" #-} Foo = ...

 The magic for (Ptr a) is built in to the compiler.

Heh. I just added it for jhc too with the exact same syntax. :)

the difference is that I do not allow them for 'type' declarations, as
desugaring of types happens very early in compilation, and it feels sort
of wrong to give type synonyms meaning, like I'm breaking referential
transparency or something..

I also allow foreign header declarations just like with ccall.

data {-# CTYPE "stdio.h" "FILE" #-} CFile

will mean that 'stdio.h' needs to be included for FILE to be declared.
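A hedged sketch of how the pragma is used from the GHC side, pairing CTYPE with a capi import (the fflush example is mine, not from the thread):

```haskell
{-# LANGUAGE CApiFFI #-}
import Foreign.Ptr (Ptr, nullPtr)
import Foreign.C.Types (CInt)

-- The CTYPE pragma records which C type (and header) this Haskell
-- type corresponds to, so Ptr CFile can be emitted as FILE *.
data {-# CTYPE "stdio.h" "FILE" #-} CFile

-- With capi, the call is compiled as actual C syntax against stdio.h.
foreign import capi "stdio.h fflush"
  c_fflush :: Ptr CFile -> IO CInt

main :: IO ()
main = c_fflush nullPtr >>= print  -- fflush(NULL) flushes all open streams
```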

   John


Re: Changes to Typeable

2012-02-14 Thread John Meacham
Proxy also has the advantage that it almost exactly mirrors what it
ends up looking like in Core: the application to Proxy is the
user-visible type application.
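A small illustration of that point (the Sized class and its byte counts are invented for the example): the Proxy argument carries no data, it only fixes the type, which is exactly the role the type application plays in Core.

```haskell
data Proxy a = Proxy

class Sized a where
  sizeOf :: Proxy a -> Int  -- the Proxy exists only to select the instance

instance Sized Bool where sizeOf _ = 1
instance Sized Int  where sizeOf _ = 8

main :: IO ()
main = print (sizeOf (Proxy :: Proxy Bool), sizeOf (Proxy :: Proxy Int))
```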

John

On Tue, Feb 14, 2012 at 8:18 AM, Iavor Diatchki
iavor.diatc...@gmail.com wrote:
 Hello,

 On Mon, Feb 13, 2012 at 5:32 PM, Edward Kmett ekm...@gmail.com wrote:

 There are fewer combinators from commonly used classes for working with
 the left argument of a bifunctor, however.


 I think that the bifunctor part of Bas's version is a bit of a red herring.
  What I like about it is that it overloads exactly what needs to be
 overloaded---the representation of the type---without the need for any fake
 parameters.  To make things concrete, here is some code:

 newtype TypeRepT t = TR TypeRep

 class Typeable t where
   typeRep :: TypeRepT t

 instance Typeable Int where typeRep = TR type_rep_for_int
 instance Typeable []  where typeRep = TR type_rep_for_list

 The two formulations support exactly the same interface (you can define
 `Proxy` and the proxied `typeRep` in terms of this class) so I wouldn't say
 that the one is easier to use than the other, but I think that this
 formulation is slightly simpler because it avoids the dummy parameter to
 typeRep.

 -Iavor







Re: [jhc] new extension in jhc: explicit namespaces in import/export lists

2012-02-13 Thread John Meacham
On Sun, Feb 12, 2012 at 11:26 PM, Roman Cheplyaka r...@ro-che.info wrote:
 * John Meacham j...@repetae.net [2012-02-12 19:26:24-0800]
 In haskell 98 [...]

 Not sure what you mean here. You aren't going to modify an existing
 standard, are you? :)
 [...] a name such as 'Foo' in an export list will indicate that all of
 a class named Foo, a type named 'Foo' and a data constructor named
 'Foo' should be exported.

 This bit doesn't sound right... I think this behaviour would be
 something that people will more often fight against (by using
 namespaces) than appreciate. (Esp. that Foo exports both the type and
 the data constructor.)

Yeah, I worded it incorrectly; actually it is more that I read my own
code incorrectly :)
The data constructor isn't matched. As it stands,
my extension behaves identically to the H2010 rules when no namespace
specifiers are used. When a namespace specifier
is used, the declaration is restricted to just the names that match
that namespace.
So it is transparent when not used.

 How about this:

 Foo in the export list may refer to a class, a type or a kind (but not a
 data constructor). It is an error if multiple entities with the name
 Foo are in scope.

Allowing multiple things of the same name was part of the goal, in
that you can use the explicit namespaces to restrict it if it is what you
intend.. But I suppose that could be considered an independent change.

In any case, part of the reason for implementing this in jhc was to
experiment with variants like this to see what works.

Actually, perhaps a better rule would be that it is an error if multiple
entities within the same namespace are matched.

hmm.. not sure on that one yet...

 I see your point regarding 'hiding' inconsistency, but I'd prefer having
 'hiding' fixed (in a similar way).

Yeah, it is just annoying that there is this one exception.

John



Re: [Haskell-cafe] Stable pointers: use of cast to/from Ptr

2012-02-12 Thread John Meacham
No, you can do nothing with the pointer on the C side other than pass
it back into Haskell. It may not even be a pointer; it may be an index
into an array deep within the RTS, for instance. The reason they can be
cast to void *'s is so you can store them in C data structures that
don't know about Haskell, which tend to take void *s.
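A minimal sketch of the supported round trip: the only legitimate operations on the cast pointer are storing it opaquely and casting it back.

```haskell
import Foreign.StablePtr
import Foreign.Ptr (Ptr)

main :: IO ()
main = do
  sp <- newStablePtr (42 :: Int)
  -- This is the value you would hand to C as a void*; C must treat it
  -- as opaque and eventually hand it back unchanged.
  let opaque = castStablePtrToPtr sp :: Ptr ()
  let sp'    = castPtrToStablePtr opaque :: StablePtr Int
  deRefStablePtr sp' >>= print  -- dereferencing happens on the Haskell side
  freeStablePtr sp
```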

John

On Sun, Feb 12, 2012 at 6:18 AM, Yves Parès yves.pa...@gmail.com wrote:
 Hello,

 According to the documentation
 (http://hackage.haskell.org/packages/archive/base/4.5.0.0/doc/html/Foreign-StablePtr.html),
 StablePtrs aim at being opaque on the C side.
 But they provide functions to be cast to/from regular void*'s.
 Does that mean if, for instance, you have a StablePtr CInt you can cast it to
 Ptr () and alter it on the C side?

 void alter(void* data)
 {
     int* x = (int*)data;
     *x = 42;
 }

 --

 -- using 'unsafe' doesn't change anything.
 foreign import ccall safe "alter"
     alter :: Ptr () -> IO ()

 main = do
     sptr <- newStablePtr (0 :: CInt)
     deRefStablePtr sptr >>= print
     alter (castStablePtrToPtr sptr)  -- SEGFAULTS!
     deRefStablePtr sptr >>= print
     freeStablePtr sptr


 But I tried it, and it doesn't work: I got a segfault when 'alter' is
 called.

 Is it normal? Does this mean I can only use my pointer as opaque? (Which I
 know to be working, as I already got a C function call back into Haskell and
 pass it the StablePtr via a 'foreign export')
 But in that case, what is the use of castStablePtrToPtr/castPtrToStablePtr,
 as you can already pass StablePtrs to and from C code?

 Thanks!





Re: Changes to Typeable

2012-02-11 Thread John Meacham
I am not so sure; adding type applications to the language seems
fairly radical and will change many aspects of the language. Something
like Proxy, which can be expressed in relatively vanilla Haskell plus
some handy desugarings, is much more attractive to me.

With type applications you end up with odd cases you need to figure
out, like forall a b. (a,b) and forall b a. (a,b) meaning different
things, maybe depending on the details of the implementation. Also,
it meshes with a core language based on type applications, like System
F or jhc's PTS. However, it seems quite plausible that there are other
ways of writing Haskell compilers. Not that I am opposed to them; I
just think they are way overkill for this situation and any solution
based on them will be ghc-bound for a long time probably.

John

On Sat, Feb 11, 2012 at 5:23 PM, Roman Leshchinskiy r...@cse.unsw.edu.au 
wrote:
 On 10/02/2012, at 23:30, John Meacham wrote:

 something I have thought about is perhaps a special syntax for Proxy, like
 {:: Int -> Int } is short for (Proxy :: Proxy (Int -> Int)). not sure whether
 that is useful enough in practice though, but could be handy if we are 
 throwing
 around types a lot.

 We really need explicit syntax for type application. There are already a lot 
 of cases where we have to work around not having it (e.g., Storable) and with 
 the new extensions, there are going to be more and more of those.

 Roman





Re: Changes to Typeable

2012-02-11 Thread John Meacham
On Fri, Feb 10, 2012 at 2:24 PM, Ian Lynagh ig...@earth.li wrote:
 But it would be better if they could use the new definition. Is
 PolyKinds sufficiently well-defined and simple that it is feasible for
 other Haskell implementations to implement it?

There is actually a much simpler extension I have in jhc called
'existential kinds' that can
express this.

Existential kinds grew from the need to express the kind of the
function type constructor

data (->) :: * -> * -> *

is fine for haskell 98, but breaks when you have unboxed values, so
jhc used the same solution as ghc, making it

data (->) :: ?? -> ? -> *

where ?? means * or #, and ? means *, #, or (#), I will call these
quasi-kinds for the moment.

all you need to express the typeable class is an additional quasi-kind
'ANY' that means, well, any kind.

then you can declare proxy as

data Proxy (a :: ANY) = Proxy

and it will act identically to the ghc version.

So why existential?

because ANY is just short for 'exists k. k'

so Proxy ends up with

Proxy :: (exists k . k) -> *

which is isomorphic to

forall k . Proxy :: k -> *

? expands to (exists k . FunRet k => k) and ?? expands to (exists k .
FunArg k => k)  where FunRet and FunArg are appropriate constraints on
the type.

so the quasi-kinds are not any sort of magic hackery, just simple
synonyms for existential kinds.

The implementation is dead simple for any unification-based kind
checker. Normally when you find a constructor application, you unify
each of the argument kinds with the kinds given by the constructor;
the only difference is that if the kind of the constructor is 'ANY'
you skip unifying it with anything, or create a fresh kind variable
with a constraint and unify with that if you need to support ? and ??
too. About a 2 line change to your kind inference code.

From the users' perspective, the Proxy will behave the same as ghc's;
the only difference is that I need the 'ANY' annotation when declaring
the type, as such kinds are never automatically inferred at the moment.
I may eventually support the 'exists k . k' syntax directly in kind
annotations; I already support it for types and it is handy on occasion.
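For comparison, a sketch of the GHC-side declaration under PolyKinds that this is meant to behave like (the kind variable is universally quantified rather than written as ANY):

```haskell
{-# LANGUAGE PolyKinds, KindSignatures #-}
-- Under PolyKinds the kind of the parameter is itself a variable,
-- so Proxy can index types of any kind.
data Proxy (a :: k) = Proxy

p1 :: Proxy Int    -- k instantiated to *
p1 = Proxy

p2 :: Proxy []     -- k instantiated to * -> *
p2 = Proxy

main :: IO ()
main = case (p1, p2) of (Proxy, Proxy) -> putStrLn "ok"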

John



Re: Changes to Typeable

2012-02-11 Thread John Meacham
typo, I meant

Proxy :: (exists k . k) -> *  is isomorphic to  Proxy :: forall k . k -> *

   John




Re: Changes to Typeable

2012-02-10 Thread John Meacham
Would it be useful to make 'Proxy' an unboxed type itself? so

Proxy :: forall k . k -> #

This would statically ensure that no one accidentally passes ⊥ as a parameter
or will get anything other than the unit 'Proxy' when trying to evaluate it,
so the compiler can unconditionally elide the parameter at runtime. Pretty
much exactly how State# gets dropped, which has almost the same definition.

something I have thought about is perhaps a special syntax for Proxy, like
{:: Int -> Int } is short for (Proxy :: Proxy (Int -> Int)). not sure whether
that is useful enough in practice though, but could be handy if we are throwing
around types a lot.

   John


On Fri, Feb 10, 2012 at 8:03 AM, Simon Peyton-Jones
simo...@microsoft.com wrote:
 Friends

 The page describes an improved implementation of the Typeable class, making 
 use of polymorphic kinds. Technically it is straightforward, but it 
 represents a non-backward-compatible change to a widely used library, so we 
 need to make a plan for the transition.

        http://hackage.haskell.org/trac/ghc/wiki/GhcKinds/PolyTypeable

 Comments?  You can fix typos or add issues directly in the wiki page, or 
 discuss by email

 Simon





Re: [Haskell] [Haskell-cafe] ANNOUNCE: system-filepath 0.4.5 and system-fileio 0.3.4

2012-02-09 Thread John Meacham
On Wed, Feb 8, 2012 at 10:56 AM, Ian Lynagh ig...@earth.li wrote:
 That sounds right. It basically means you don't have to write the C
 stubs yourself, which is nice because (a) doing so is a pain, and (b)
 when the foreign import is inside 2 or 3 CPP conditionals it's even more
 of a pain to replicate them correctly in the C stub.

 Unfortunately, there are cases where C doesn't get all the type
 information it needs, e.g.:
    http://hackage.haskell.org/trac/ghc/ticket/2979#comment:14
 but I'm not sure what the best fix is.

I believe jhc's algorithm works in this case. Certain type constructors have C
types associated with them; in particular, many newtypes have C types that are
different than their contents. So my routine that finds out whether an argument
is suitable for FFIing returns both a C type and the underlying raw type (Int#
etc.) that the type maps to. The algorithm checks if the current type
constructor has an associated C type; if it doesn't, it expands the newtype
one layer and tries again. However, if it does have a C type, it still recurses
to get at the underlying raw type, but then replaces the C type with whatever
was attached to the newtype. In the case of 'Ptr a' it recursively runs the
algorithm on the argument to 'Ptr', then takes that C type and appends
a '*' to it.
If the argument to 'Ptr' is not an FFIable type, then it just returns
HsPtr as the C type.

Since CSigSet has sigset_t associated with it, 'Ptr CSigSet' ends up turning
into 'sigset_t *' in the generated code. (Ptr (Ptr CChar)) turns into char**
and so forth.

An interesting quirk of this scheme is that it faithfully translates the
perhaps unfortunate idiom of

newtype Foo_t = Foo_t (Ptr Foo_t)

into  foo_t (an infinite chain of pointers)

which is actually what the user specified. :) I added a check for recursive
newtypes that chops the recursion to catch this, as people seem to utilize it.
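The expansion described above can be modeled as a toy recursive walk (a sketch of the idea only; the constructor names and the HsPtr fallback mirror the text, everything else is invented):

```haskell
-- A miniature model of the C-type assignment: a type is either a
-- primitive with a known C type, a newtype with an optional attached
-- C type, a Ptr applied to another type, or not FFIable at all.
data Ty = Prim String              -- raw FFIable type with its C type
        | Newtype (Maybe String) Ty
        | PtrTo Ty
        | NotFFIable

cType :: Ty -> String
cType (Prim c)             = c
cType (Newtype (Just c) _) = c          -- an attached C type wins
cType (Newtype Nothing t)  = cType t    -- expand one layer and retry
cType (PtrTo NotFFIable)   = "HsPtr"    -- unknown pointee: opaque pointer
cType (PtrTo t)            = cType t ++ "*"
cType NotFFIable           = "HsPtr"

main :: IO ()
main = do
  putStrLn (cType (PtrTo (Newtype (Just "sigset_t") (Prim "HsInt"))))
  putStrLn (cType (PtrTo (PtrTo (Prim "char"))))
```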

 I ask because jhc needs such a feature (very hacky method used now,
 the rts knows some problematic functions and includes hacky wrappers
 and #defines.) and I'll make it behave just like the ghc one when possible.

 Great!

It has now been implemented, shall be in jhc 0.8.1.

John



Re: [Haskell] ANNOUNCE: jhc-0.8.0

2012-02-09 Thread John Meacham
Thanks. I have included your patch.

However, the ultimate root cause of the Options.hs problem is not that it isn't
being included; it is under drift_processed/Options.hs. The issue is
that the Makefile still seems to think it needs the original, at least
for you for some reason, and isn't using the drift_processed one. hmm..

John

On Thu, Feb 9, 2012 at 10:49 AM, Sergei Trofimovich sly...@gmail.com wrote:
 On Tue, 7 Feb 2012 20:03:20 -0800
 John Meacham j...@repetae.net wrote:

 I am happy to announce jhc 0.8.0

 There have been A lot of major changes in jhc with this release.

  - http://repetae.net/computer/jhc

 Hi John!

 Release tarball is missing the src/Options.hs file. Sent a patch via 'darcs 
 send'.

 ] hunk ./Makefile.am 30
 -       src/Name/Prim.hs src/PackedString.hs src/Ho/ReadSource.hs \
 +       src/Name/Prim.hs src/Options.hs src/PackedString.hs 
 src/Ho/ReadSource.hs \

 --

  Sergei



Re: [Haskell] [Haskell-cafe] ANNOUNCE: system-filepath 0.4.5 and system-fileio 0.3.4

2012-02-09 Thread John Meacham
On Thu, Feb 9, 2012 at 11:23 AM, Ian Lynagh ig...@earth.li wrote:
 On Thu, Feb 09, 2012 at 04:52:16AM -0800, John Meacham wrote:

 Since CSigSet has sigset_t associated with it, 'Ptr CSigSet' ends up 
 turning
 into 'sigset_t *' in the generated code. (Ptr (Ptr CChar)) turns into char**
 and so forth.

 What does the syntax for associating sigset_t with CSigSet look like?

There currently isn't a user accessible one, but CSigSet is included in the
FFI spec so having the compiler know about it isn't that bad. In fact, it is
how I interpreted the standard. Otherwise, why would CFile be specified if it
didn't expand 'Ptr CFile' properly? I just have a single list of associations
that is easy to update at the moment, but a user definable way is something I
want in the future. My current syntax idea is

data CFile = foreign "stdio.h" "FILE"

but it doesn't extend easily to 'newtype's,
or maybe a {-# CTYPE "FILE" #-} pragma...

The 'Ptr' trick is useful for more than just pointers; I use the same thing to
support native complex numbers. I have

data Complex_ :: # -> #  -- type function of unboxed types to unboxed types.

then can do things like 'Complex_ Float64_' to get hardware supported complex
doubles. The expansion happens just like 'Ptr', except instead of appending
'*', when it encounters Complex_ it prepends '_Complex ' (a C99 standard
keyword).

You can then import primitives like normal (for jhc)

foreign import primitive "Add" complexPlus ::
    Complex_ Float64_ -> Complex_ Float64_ -> Complex_ Float64_

and lift it into a data type and add instances for the standard numeric classes
if you wish. (I have macros that automate the somewhat repetitive instance
creation in lib/jhc/Jhc/Num.m4)
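A portable sketch of the "lift it into a data type and add instances" step, using a boxed pair of Doubles in place of jhc's unboxed Complex_ Float64_ so it runs anywhere:

```haskell
data Complex = Complex !Double !Double deriving (Eq, Show)

-- A hand-rolled Num instance; in jhc these operations would bottom
-- out in the imported hardware primitives instead.
instance Num Complex where
  Complex a b + Complex c d = Complex (a + c) (b + d)
  Complex a b - Complex c d = Complex (a - c) (b - d)
  Complex a b * Complex c d = Complex (a*c - b*d) (a*d + b*c)
  negate (Complex a b)      = Complex (negate a) (negate b)
  abs z                     = z  -- placeholder; not a real magnitude
  signum z                  = z  -- placeholder
  fromInteger n             = Complex (fromInteger n) 0

main :: IO ()
main = print (Complex 1 2 * Complex 3 4)  -- prints Complex (-5.0) 10.0
```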

John



Re: Proposal: require spaces around the dot operator

2012-02-09 Thread John Meacham
I mean, it is not worth worrying about the syntax until the extension has been
implemented, used, and proven useful to begin with. Monads were in use
well before the 'do' notation. Shaking out what the base primitives that make
up a monad are took a while to figure out.

Even discussing syntax feels a little like a garage band discussing what
the lighting of their stage show will look like before they have learned to
play their instruments.

If we can implement it and test it without breaking existing code, why
wouldn't we? It would mean more people can experiment with the
feature because they wouldn't have to modify existing code much. So
we will have more feedback and experience with how it interacts with
other aspects of the language.

John

On Thu, Feb 9, 2012 at 6:41 PM, Greg Weber g...@gregweber.info wrote:
 There are 2 compelling reasons I know of to prefer dot for record access
 1) follows an almost universal convention in modern programming languages
 2) is consistent with using the dot to select functions from module 
 name-spaces

 We can have a lot of fun bike-shedding about what operator we would
 prefer were these constraints not present. Personally I wouldn't care.
 However, I find either one of these 2 points reason enough to use the
 dot for record field access, and even without a better record system
 the second point is reason enough to not use dot for function
 composition.

 It is somewhat convenient to argue that it is too much work and
 discussion for something one is arguing against. The only point
 that should matter is how existing Haskell code is affected.

 On Thu, Feb 9, 2012 at 8:27 PM, Daniel Peebles pumpkin...@gmail.com wrote:
 I'm very happy to see all the work you're putting into the record
 discussion, but I'm struggling to see why people are fighting so hard to get
 the dot character in particular for field access. It seems like a huge
 amount of work and discussion for a tiny bit of syntactic convenience that
 we've only come to expect because of exposure to other very different
 languages.

 Is there some fundamental reason we couldn't settle for something like # (a
 valid operator, but we've already shown we're willing to throw that away in
 the MagicHash extension) or @ (only allowed in patterns for now)? Or we
 could even keep (#) as a valid operator and just have it mean category/lens
 composition.

 Thanks,
 Dan

 On Thu, Feb 9, 2012 at 9:11 PM, Greg Weber g...@gregweber.info wrote:

 Similar to proposal #20, which wants to remove it, but immediately
 less drastic, even though the long-term goal is the same.
 This helps clear the way for the usage of the unspaced dot as a record
 field selector as shown in proposal #129.

 After this proposal shows clear signs of moving forward I will add a
 proposal to support a unicode dot for function composition.
 After that we can all have a lively discussion about how to fully
 replace the ascii dot with an ascii alternative such as ~ or 
 After that we can make the dot operator illegal by default.

 This has already been discussed as part of a records solution on the
 ghc-users mail list and documented here:
 http://hackage.haskell.org/trac/ghc/wiki/Records/DotOperator







Re: [Haskell-cafe] Cannot understand liftM2

2012-02-09 Thread John Meacham
A good first step would be understanding how the other entry works:

cartProd :: [a] -> [b] -> [(a,b)]
cartProd xs ys = do
    x <- xs
    y <- ys
    return (x,y)

It is about halfway between the two choices.
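The three formulations can be checked against each other directly (a small sketch; the helper names are mine):

```haskell
import Control.Monad (liftM2)

viaLiftM2, viaDo, viaComp :: [a] -> [b] -> [(a, b)]
viaLiftM2 = liftM2 (,)                          -- the point-free version
viaDo xs ys = do x <- xs; y <- ys; return (x, y)
viaComp xs ys = [ (x, y) | x <- xs, y <- ys ]   -- the comprehension

main :: IO ()
main = do
  print (viaLiftM2 [1, 2 :: Int] "ab")
  -- all three agree on every input:
  print (viaLiftM2 [1,2::Int] "ab" == viaDo [1,2] "ab"
      && viaDo [1,2::Int] "ab" == viaComp [1,2] "ab")
```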

John

On Thu, Feb 9, 2012 at 9:37 AM, readams richard.ad...@lvvwd.com wrote:
 Nice explanation.  However, at
 http://stackoverflow.com/questions/4119730/cartesian-product it was pointed
 out that this

 cartProd :: [a] -> [b] -> [(a, b)]
 cartProd = liftM2 (,)

 is equivalent to the cartesian product produced using a list comprehension:

 cartProd xs ys = [(x,y) | x <- xs, y <- ys]

 I do not see how your method of explanation can be used to explain this
 equivalence?  Nevertheless, can you help me to understand how liftM2 (,)
 achieves the cartesian product?  For example,

 Prelude Control.Monad.Reader> liftM2 (,) [1,2] [3,4,5]
 [(1,3),(1,4),(1,5),(2,3),(2,4),(2,5)]

 Thank you!

 --
 View this message in context: 
 http://haskell.1045720.n5.nabble.com/Cannot-understand-liftM2-tp3085649p5470185.html
 Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com.




Re: [Haskell] [Haskell-cafe] ANNOUNCE: system-filepath 0.4.5 and system-fileio 0.3.4

2012-02-07 Thread John Meacham
On Tue, Feb 7, 2012 at 4:24 AM, Simon Marlow marlo...@gmail.com wrote:
 Separately the unix package added support for undecoded FilePaths
 (RawFilePath), but unfortunately at the same time we started using a new
 extension in GHC 7.4.1 (CApiFFI), which we decided not to document because
 it was still experimental:

Hi, from my reading, it looks like 'capi' means, from a logical perspective:

Don't assume the object is addressable, but rather that the standard C syntax
for calling this routine will expand into correct code when compiled with the
stated headers.

So it may be implemented by, say, creating a stub .c file that includes the
headers and creates a wrapper around each function, or, when compiling via C,
actually including the given headers and the function calls in the code.

I ask because jhc needs such a feature (a very hacky method is used now: the
rts knows about some problematic functions and includes wrappers and #defines
for them) and I'll make it behave just like the GHC one where possible.
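For reference, here is what a 'capi' import looks like on the GHC side. This is GHC's CApiFFI extension as described above, not jhc code, and it needs a working C compiler at build time since GHC generates a C stub compiled against the stated header:

```haskell
{-# LANGUAGE CApiFFI #-}

-- With 'capi', GHC compiles the call against math.h, so the import
-- works even if 'sin' is a macro or inline function in that header,
-- rather than an addressable object.
foreign import capi "math.h sin" c_sin :: Double -> Double
```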

   John

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] ANNOUNCE: jhc-0.8.0

2012-02-07 Thread John Meacham
I am happy to announce jhc 0.8.0

There have been a lot of major changes in jhc with this release.

 - http://repetae.net/computer/jhc

- A brand new and sanitized library description file format. Now it is a true
  YAML file. The previous quasi-cabal files are supported but deprecated.

- new standard library 'bytestring' :)

- jhc can now embed C code and header files directly into hl libraries. The
  code will automatically be unpacked and linked when needed, transparently.
  This allows 'bytestring', for instance, to carry around its fpsstring.c
  low-level implementation without having to coordinate the install of C
  headers/code along with the jhc haskell library.

- the library description files can now affect pretty much any aspect of
  compilation, so it is much easier to make a self-contained build or for tools
  like configure/cabal to interface with jhc: they just need to emit an
  appropriate YAML file rather than pass the right options to jhc in addition
  to creating a description file. See the user manual for the set of fields
  that can be set.

- jhc now understands ghc-style LANGUAGE pragmas and -X options and will
  translate them into the appropriate jhc extensions as needed.

- jhc transparently handles '.hsc' files (assuming you have hsc2hs installed).
  It even does the right thing when cross compiling and will use the target
  architecture's types rather than the host's.

- the bang patterns extension has been implemented.

- the primitive libraries have been re-done; now there is a truly
  minimal jhc-prim library that is everything that must exist for jhc to work
  but brings in no code of its own. Everything else is implemented in normal,
  user-replaceable haskell code.

  Want to create a variant of haskell that has 16 bit Ints, ASCII 8 bit Chars
  and pervasively uses null-terminated C style strings? Just create a base-tiny
  library and link it against jhc-prim.

- The standalone deriving extension has been partially implemented, this
  greatly improves the ability to re-arrange the libraries logically.

- Better haskell 2010 compatibility.

- Haskell object files and the --ho-dir have been fully removed in favor of
  pervasive use of the code cache. This allows jhc to cache compilation results
  with finer granularity than individual files would allow. Command line
  options have been renamed accordingly, we have --cache-dir, --no-cache,
  --readonly-cache and --ignore-cache to modify jhc's behavior with respect to
  the cache. Additionally a --purge-cache option has been added.

- rather than trying to stuff everything in a monolithic .c file, jhc now
  builds a tree containing all the source needed to compile the final
  executable or shared library. This will help people porting jhc to new
  architectures and allows the embedding of c code in the haskell libraries.

- Many programs typechecked in the past that were, in fact, invalid; most now
  fail properly with an informative message.

- pulled in most of the ghc typechecking regression tests.

- internal class representation completely re-worked.

- unboxed characters now supported, with the obvious syntax.


 John

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] -XPolyKinds and undefined arguments

2012-02-07 Thread John Meacham
Can't you do something like have the kind be unlifted? For instance:

data Proxy (a :: #)

data Type1 :: #
data Type2 :: #
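The kind-`#` trick above is jhc-flavored; in portable GHC Haskell the same "only the type matters" idea can be sketched with a poly-kinded proxy. Unlike the empty Proxy proposed in the thread, this local one has a constructor so the example can run without passing undefined; the class and widths are illustrative:

```haskell
{-# LANGUAGE PolyKinds #-}

-- A poly-kinded proxy: the value carries no information, only the
-- type index matters.
data Proxy a = Proxy

-- A type-indexed "size" function that never inspects a value of
-- type a, only the proxy selecting the instance.
class HasSize a where
  sizeOf :: Proxy a -> Int

instance HasSize Bool where
  sizeOf _ = 1

instance HasSize Int where
  sizeOf _ = 8  -- illustrative, not a guaranteed width
```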

 John


On Tue, Feb 7, 2012 at 12:19 PM, Douglas McClean
douglas.mccl...@gmail.com wrote:
 There are all sorts of useful functions that would otherwise require
 explicit type applications which we work around by passing undefined and
 ignoring its value. (See
 http://www.haskell.org/pipermail/libraries/2010-October/014742.html for a
 prior discussion of why this is, how the use of undefined for something that
 is perfectly well defined smells, and various suggestions about one day
 changing it.)

 Examples: Data.Typeable.typeOf, Foreign.Storable.sizeOf, etc.

 It seems to me that the new -XPolyKinds extension might provide a good
 long-term solution to statically insuring that arguments like this are
 always undefined and are never inspected.

 If we had a library definition:

   -- a kind-indexed family of empty types, Proxy :: AnyK - *
   data Proxy a

 then we could write things that resemble type applications such as sizeOf
 @Int64. Unlike the current approach, we'd be assured that nobody tries to
 use the value of such a proxy in any meaningful way, since you can't pattern
 match it or pass a Proxy t in a context expecting a t. We'd have formal
 documentation of which parameters are in this
 value-is-unused-and-only-type-matters category (which might help us erase
 more often?).

 This is essentially the same Proxy type as in section 2.5 of the Giving
 Haskell a Promotion paper, except that by making it empty instead of
 isomorphic to () we end up with a type that only has one value (bottom)
 instead of two (Proxy and bottom), which should be a desirable property if
 we are trying to erase uses and forbid pattern matches and such, I think.

 Adding a syntax extension (or even some TH?) to make something like @Int64
 desugar to undefined :: Proxy Int64 would eliminate the clutter/smell from
 the undefineds.

 Poly kinds allow us to use the same mechanism where the undefined
 parameter is a phantom type at a kind other than *. It seems to work nicely,
 and it cleaned up some of my code by unifying various proxy types I had made
 specialized at kinds like Bool.

 How would you compare this to some of the proposals in the 2010 thread?
 Would people still prefer their fantasy-Haskell-of-the-future to include
 full-blown support for explicit type applications instead?


 -Doug

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Impredicative types error

2012-01-31 Thread John Meacham
Hi, I am running into an issue where some code that compiled and
worked under 6.12 is failing under 7.0, the offending code is

class DeNameable a where
deName :: Module -> a -> a

getDeName :: Tc (DeNameable n => n -> n)
getDeName = do
mn <- asks (tcInfoModName . tcInfo)
return (\n -> deName mn n)

Tc is a plain typechecking monad and this returns a generic denaming
function that can be used to turn haskell names back into human
readable form before printing errors in jhc.

I have the ImpredicativeTypes LANGUAGE feature turned on.

the error I get under 7.0 is

src/FrontEnd/Tc/Monad.hs:131:29:
Couldn't match expected type `n -> n'
with actual type `DeNameable n'
In the second argument of `deName', namely `n'
In the expression: deName mn n
In the first argument of `return', namely `(\ n -> deName mn n)'
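Not discussed in the message itself, but the usual workaround for this class of error is to wrap the polymorphic function in a newtype instead of relying on ImpredicativeTypes. A toy sketch, with stand-ins for Tc and the real DeNameable instances:

```haskell
{-# LANGUAGE RankNTypes, FlexibleInstances #-}

-- Stand-ins for the types in the original message.
type Module = String

class DeNameable a where
  deName :: Module -> a -> a

instance DeNameable String where
  deName m s = m ++ "." ++ s

-- Wrapping the polymorphic result in a newtype gives the quantifier
-- a monomorphic home, so no impredicative instantiation is needed.
newtype DeNamer = DeNamer { runDeNamer :: forall n. DeNameable n => n -> n }

getDeName :: Module -> DeNamer
getDeName mn = DeNamer (\n -> deName mn n)
```

In the real code `getDeName` would run in the Tc monad and return a `DeNamer`, which callers unwrap with `runDeNamer` at whatever type they need.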

John

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] where to put general-purpose utility functions

2012-01-25 Thread John Meacham
That is one of the wonderful things about haskell: most languages have
a negative correlation between code size and productivity, but with
haskell there is a strong positive correlation. You can re-use so much
that as your code base grows it becomes easier to add new features
rather than harder.

I solve that problem by having a darcs repository for all my utility
modules, then when I want to use them in a project, I just 'darcs
pull' them into the current repository and they appear. as long as a
patch doesn't cross the boundary between the utility modules and the
rest of the codebase I can push changes upstream to the utility
repository from my specific project, or pull in new improvements to
the utility functions. I can also make local modifications to the
utility modules and just not push the local modifications upstream if
needed. The cool thing is that the darcs history has the merged
history of my projects, and tagging with a version number or release
will snapshot both your program and the exact version of the utility
routines used at that time.

The ability to have multiple sibling darcs repositories is a really
powerful feature. quite handy.

I had a similar system before darcs where I used a shared RCS
directory between projects, but that wasn't nearly as seamless or
integrated.

Plus it makes it easier when collaborating, as far as anyone is
concerned, the project is a single darcs repository they can get and
build and create patches for. The fact that behind the scenes it is
actually a collection of repositories spawning patches, modifying
them, and exchanging them in a bizarre parody of bacterial plasmid
exchange* is completely transparent.

John

* stolen prose, I just liked the sound of it.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why were unfailable patterns removed and fail added to Monad?

2012-01-19 Thread John Meacham
 As expected, no warnings. But if I change this unfailable code above
 to the following failable version:

    data MyType = Foo | Bar

    test myType = do
        Foo - myType
        return ()

 I *still* get no warnings! We didn't make sure the compiler spits out
 warnings. Instead, we guaranteed that it *never* will.

This is actually the right, useful behavior. Using things like

do
   Just x <- xs
   Just y <- ys
   return (x,y)

will do the right thing, failing if xs or ys results in Nothing. For
instance, in the list monad, it will create the cross product of the
non-Nothing members of the two lists. A parse monad may backtrack and
try another route, the IO monad will create a useful (and
deterministic/catchable) exception pointing to the exact file and line
number of the pattern match. The do notation is the only place in
haskell that allows us to hook into the pattern matching mechanism of
the language in a general way.
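A runnable version of the list-monad case described above; pattern-match failure in do-notation calls fail, which for lists is the empty list, so Nothing elements are simply skipped:

```haskell
-- A failed 'Just' match calls 'fail'; in the list monad that is [],
-- so each Nothing contributes nothing to the cross product.
pairUp :: [Maybe a] -> [Maybe b] -> [(a, b)]
pairUp xs ys = do
  Just x <- xs
  Just y <- ys
  return (x, y)
```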

John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell] ANNOUNCE: jhc-0.7.8

2012-01-18 Thread John Meacham
Announcing jhc 0.7.8! This is mainly a bug fix release.

  http://repetae.net/computer/jhc/

Changes include:

 * Now compiles under ghc 7.0.x as well as 6.12
 * new standard libraries
* filepath
* deepseq
 * new platforms supported
Nintendo DSi, GBA, and GP32 (thanks to Brian McKenna)
 * many bug fixes for various reported issues. see darcs logs for details.
 * LIB_OPTIONS respected when building external libraries
 * better float outward optimization
 * better diagnostics/dumping of core/grin on internal type errors
 * rewrite UnionSolve to be much faster, node analysis sped up accordingly.

John

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] black Wikipedia

2012-01-18 Thread John Meacham
Not to mention ebay, craigslist, etc..
http://www.techdirt.com/articles/20111005/10082416208/monster-cable-claims-ebay-craigslist-costco-sears-are-rogue-sites.shtml

When there is no burden of proof for someone to take down a site,
things get very complicated.

For instance, this package could be enough to get all of hackage taken
down, since Astrolabe decided they own timezone data[1]:

http://hackage.haskell.org/package/timezone-olson-0.1.2

In fact, SOPA and PIPA would make hackage pretty impossible to legally
host, unless the hackage maintainers want to do exhaustive patent and
copyright searches on all uploaded code before they allow it to be
posted.

[1] 
http://www.techdirt.com/articles/20111006/11532316235/astrolabe-claims-it-holds-copyright-timezone-data-sues-maintainers-public-timezone-database.shtml

   John



On Wed, Jan 18, 2012 at 10:17 AM, Brandon Allbery allber...@gmail.com wrote:
 On Wed, Jan 18, 2012 at 13:11, Henning Thielemann
 lemm...@henning-thielemann.de wrote:

 On Wed, 18 Jan 2012, Andrew Butterfield wrote:

 Just add ?banner=none to the url if you really have to read the page


 Maybe the intention was to demonstrate that censorship (in this case
 self-censorship) is mostly a problem for average users but not for advanced
 users.


 There isn't going to be a disable-javascript or ?banner hack when anyone
 anywhere can force a website to be redirected to some DOJ page without
 providing any proof.  (Yes, really.)

 --
 brandon s allbery                                      allber...@gmail.com
 wandering unix systems administrator (available)     (412) 475-9364 vm/sms


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] black Wikipedia

2012-01-18 Thread John Meacham
And such a thing can take months or years for the courts to figure
out, and unless your free site has a lawyer to fight for your side,
under SOPA/PIPA you can be down the entire time with little recourse.
For anyone hosting content like hackage, github, etc., when you have
thousands of packages, someone somewhere is going to be upset by
something and will be able to take the site down. _Regardless of the
merit of their case_, the site will go down while they figure it out. Not
only that, they would be able to take the site down if it merely contains a
link to an objectionable site. For instance, if one of the homepage
fields in some cabal file somewhere pointed to a site that someone
took offense to, we would not only be obligated to patrol the
code uploaded, but also the targets of any urls within said
code/description... and retroactively remove stuff if said links
change to contain objectionable material (for a very vague definition
of objectionable). It is a really messed up law.

John

On Wed, Jan 18, 2012 at 2:46 PM, Hans Aberg haber...@telia.com wrote:
 On 18 Jan 2012, at 23:11, Brandon Allbery wrote:

 There is the Beastie Boys case, where the judge decided copyright protects 
 what is creatively unique.

 But such judgments are rare, sadly.  And for every Beastie Boys case there's 
 at least one The Verve case.

 I did not know that. But it was a UK case, wasn't it? - UK copyright laws are 
 a lot more tight.

 Hans



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] black Wikipedia

2012-01-18 Thread John Meacham
However the fallout is likely to destroy both open source and resale
on the internet.

For instance, the existence of this is enough to get hackage a
takedown under SOPA.
http://hackage.haskell.org/package/conjure

Now, you might say we can just move hackage out of the US, but then
any site that _links_ to hackage from within the US will be
subject to takedown from within the US, and any US-based search engine
would be unable to index hackage or return results for it, until
hackage hired a lawyer to prove they don't facilitate piracy. And I
am not even sure they would win: providing a bittorrent client is
facilitating piracy because it can be used as a piratebay client, and
supporting piracy is transitive under SOPA. Think freshmeat.net,
slashdot.org, github: basically any site that links to user content
can be shut down. And haskell.org won't be able to link to it without
also falling prey to SOPA. It's transitive.

Not only that, but the proponents are not just Hollywood; it is anyone
that feels they will have an advantage from the ability to bully
internet sites. For instance, Monster Cable is a huge supporter, and
they have a history of suing any site that posts bad reviews of their
products or anyone that uses the words 'monster' or 'cable'. Under
SOPA they could just get the sites they want shut down until they
capitulate. Silicon Valley need not fear this sort of thing too much,
as they can bite back with lawyers of their own, but independent sites
will find themselves shut off or delisted, and sites linking to them
shut down.

John

On Wed, Jan 18, 2012 at 3:42 PM, Hans Aberg haber...@telia.com wrote:
 Actually, it is a battle between the Hollywood and Silicon Valley industries.

 Hans


 On 19 Jan 2012, at 00:11, John Meacham wrote:

 And such a thing can take months or years for the courts to figure
 out, and unless your free site has a lawyer to fight for your side,
 under SOPA/PIPA you can be down the entire time with little recourse.
 For anyone hosting content lke hackage, github, etc. when you have
 thousands of packages, someone somewhere is going to be upset by
 something and will be able to take the site down. _regardless of the
 merit of their case_ the site will go down as they figure it out. Not
 only that, they would be able to take the site down if it contains a
 link to an objectionable site. for instance, if one of the homepage
 fields in some cabal file  somewhere pointed to a site that someone
 took offense too on it. we would not only be obligated to patrol the
 code uploaded, but the targets of any urls within said
 code/description... and retroactively remove stuff if said links
 change to contain objectional material. (for a very vauge definition
 of objectionable). it is a really messed up law.

    John

 On Wed, Jan 18, 2012 at 2:46 PM, Hans Aberg haber...@telia.com wrote:
 On 18 Jan 2012, at 23:11, Brandon Allbery wrote:

 There is the Beastie Boys case, where the judge decided copyright 
 protects what is creatively unique.

 But such judgments are rare, sadly.  And for every Beastie Boys case 
 there's at least one The Verve case.

 I did not know that. But it was a UK case, wasn't it? - UK copyright laws 
 are a lot more tight.

 Hans




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Splitting off many/some from Alternative

2011-12-12 Thread John Meacham
Yes, they are a major pain for frisby, which is a parser combinator
library that needs to be cleverer about recursion; the many and some that
come with Applicative actually cause infinite loops.
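The divergence is easy to reproduce with the Maybe instance rather than frisby itself: the default some/many only terminate when each step can eventually fail, and an action that always succeeds without consuming anything loops forever.

```haskell
import Control.Applicative (empty, many)

-- 'many empty' terminates: the very first step fails, so the
-- default definition falls through to 'pure []', giving Just [].
noneAtAll :: Maybe [Int]
noneAtAll = many empty

-- By contrast, 'many (Just 1)' never terminates: Just 1 always
-- "succeeds" without consuming anything, so 'many' keeps going.
-- loops :: Maybe [Int]
-- loops = many (Just 1)
```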

John

On Sun, Dec 11, 2011 at 9:18 PM, Gregory Crosswhite
gcrosswh...@gmail.com wrote:
 Hey everyone,

 I am sure that it is too late to do more than idly speculate about this, but
 could we split the some/many methods out from Alternative?  They simply
 don't make sense except in a subset of possible Alternatives --- in most
 cases they just result in an infinite loop.  That is to say, it is not just
 that they are extraneous functionality, but most of the time they are
 *impossible* functionality to even implement!  In a way, it is a lie to be
 including them in Alternative since people making use of the class might
 incorrectly (but quite reasonably!) assume that any type that is an instance
 of Alternative *has* a well-defined some and many method, when this is
 actually the exception rather than the rule.

 It is only recently that I have been able to grok what some and many are
 even about (I think), and they seem to only make sense in cases where
 executing the Alternative action results in a portion of some input being
 consumed or not consumed.  some v means consume at least one v and return
 the list of items consumed or fail, and many v means consume zero or
 more v and return the list of items consumed or the empty list of none are
 consume.  It thus makes sense for there to be some subclass of Alternative
 called something like Consumptive that contains these methods.  The
 majority of Alternative instances would no longer have these methods, and
 again that would actually be an improvement since in such cases some/many
 were unworkable and did nothing but cause infinite loops anyway.

 Normally it would be unthinkable to even consider such a change to the base
 libraries, but I suspect that there are not more than a couple of packages
 out there that make active use of the some/many instances of Alternative;
  it is really only parser packages that need some/many, and users most
 likely use the versions included with the packages themselves rather than
 the Alternative version.  Could we verify that this is the case, and if so
 split them away?  I thought I heard a trick whereby people were able to grep
 all the source on Hackage, but I can't remember what it was.  :-)

 Just a thought.  :-)

 Thanks,
 Greg

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Revert a CAF?

2011-12-06 Thread John Meacham
Can you use a weak pointer to do what you want?
If you keep a weak pointer to the head of your expensive list then
it will be reclaimed at the next major GC, I believe. I have used
weak pointers for vaguely similar purposes before.
I guess a downside is that they will always be reclaimed on GC even
if there isn't memory pressure, I think.
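A minimal sketch of the weak-pointer idea using GHC's System.Mem.Weak. Reclamation timing is up to the runtime, so only the still-alive case is checked here; with optimization the key can also be collected earlier than you might expect:

```haskell
import System.Mem.Weak (deRefWeak, mkWeakPtr)

-- deRefWeak yields Just the value while the key is still reachable,
-- and Nothing once the GC has reclaimed it.
probeAlive :: IO Bool
probeAlive = do
  let xs = [1 .. 1000] :: [Int]
  w <- mkWeakPtr xs Nothing
  m <- deRefWeak w
  -- Using xs after the probe keeps it reachable at deRefWeak time.
  return (fmap sum m == Just (sum xs))
```

For the CAF case one would keep only the weak pointer globally and rebuild the list with `deRefWeak` returning Nothing after a GC has run.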

   John
(resent, accidentally sent to wrong address first)

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] A Mascot

2011-11-15 Thread John Meacham
When designing logos, people tend to concentrate on the lambda, which
corresponds to the functional aspect of haskell. Not nearly enough
attention is paid to the other striking feature, the laziness. The
'bottom' symbol _|_ should feature prominently. The two most defining
features of haskell are that it is purely functional and that _|_ inhabits
every type; the combination of the two is very powerful.

John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: case of an empty type should have no branches

2011-10-10 Thread John Meacham
What are you trying to accomplish? A case doesn't necessarily force
evaluation in haskell, depending on the binding pattern. For instance,

case x of _ -> undefined

will parse, but the function is still lazy in x. It is exactly equivalent to

quodlibet x = undefined

If you want to actually enforce that quodlibet _|_ evaluates to _|_
then you want quodlibet x = x `seq` undefined. That too is
technically equivalent in a theoretical sense, but may have practical
benefits when it comes to error messages, depending on what you are
trying to accomplish.
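The difference is easy to observe if the undefined result is replaced with a total one; a small sketch:

```haskell
-- A wildcard case does not force its scrutinee: applying this to
-- undefined still yields 42.
lazyCase :: a -> Int
lazyCase x = case x of _ -> 42

-- seq does force its first argument: applying this to undefined
-- is itself undefined.
strictCase :: a -> Int
strictCase x = x `seq` 42
```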

  John

On Sun, Oct 9, 2011 at 4:26 AM, Roman Beslik ber...@ukr.net wrote:
 Hi.

 Why the following code does not work?
 data Empty
 quodlibet :: Empty -> a
 quodlibet x = case x of
 parse error (possibly incorrect indentation)

 This works in Coq, for instance. Demand for empty types is not big, but they
 are useful for generating finite types:
 Empty ≅ {}
 Maybe Empty ≅ {0}
 Maybe (Maybe Empty) ≅ {0, 1}
 Maybe (Maybe (Maybe Empty)) ≅ {0, 1, 2}
 etc. Number of 'Maybe's = number of elements. I can replace @Maybe Empty@
 with @()@, but this adds some complexity.

 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Why not Darcs?

2011-04-21 Thread John Meacham
Um, the patch theory is what makes darcs just work. There is no need
to understand it, any more than you have to know VLSI design to
understand how your computer works. The end result is that darcs
repositories don't get corrupted and the order you integrate patches
doesn't affect things, meaning cherry-picking is painless.

I think the main problem with patch theory is its PR. It is a
super cool algorithm, and droundy is rightly proud of it, so he
highlighted it. I think this caused people to believe they had to
understand the patch theory rather than just sit back and enjoy it.

Incidentally, I wrote a github-like site based around darcs a few
years ago at codehole.org. It is just used internally by me for
certain projects, but if people were interested, I could resume work
on it and make it public.

John

On Thu, Apr 21, 2011 at 3:16 PM, John Millikin jmilli...@gmail.com wrote:
 My chief complaint is that it's built on patch theory, which is
 ill-defined and doesn't seem particularly useful. The Bazaar/Git/Mercurial
 DAG model is much easier to understand and work with.

 Possibly as a consequence of its shaky foundation, Darcs is much slower than
 the competition -- this becomes noticeable for even very small repositories,
 when doing a lot of branching and merging.

 I think it's been kept alive in the Haskell community out of pure eat our
 dogfood instinct; IMO if having a VCS written in Haskell is important, it
 would be better to just write a new implementation of an existing tool. Of
 course, nobody cares that much about what language their VCS is written in,
 generally.

 Beyond that, the feeling I get of the three major DVCS alternatives is:

 git: Used by Linux kernel hackers, and Rails plugin developers who think
 they're more important than Linux kernel hackers

 hg/bzr: Used by people who don't like git's UI, and flipped heads/tails when
 picking a DVCS (hg and bzr are basically equivalent)

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Strictness is observable

2011-04-01 Thread John Meacham
Error is not catchable in haskell 98. Only things thrown by raiseIO are.
On Apr 1, 2011 12:02 AM, o...@okmij.org wrote:
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Strictness is observable

2011-04-01 Thread John Meacham
On Fri, Apr 1, 2011 at 2:23 AM,  o...@okmij.org wrote:

 John Meacham wrote:
 Error is not catchable in haskell 98. Only things thrown by raiseIO are.

 I see; so GHC, absent any LANGUAGE pragma, should have arranged for
 `error' to generate a non-catchable exception.

Actually, it was because you imported Control.Exception. The catch/handle in
Control.Exception has different behavior than the catch in the Prelude: one
catches imprecise exceptions, the other doesn't.
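A sketch of the Control.Exception side of that difference, using today's API (Prelude's own catch has since been removed from GHC's base):

```haskell
import Control.Exception (ErrorCall (..), evaluate, try)

-- Control.Exception's try sees imprecise exceptions such as those
-- raised by 'error'; an IOError-only catch would not.
catchErrorCall :: IO String
catchErrorCall = do
  r <- try (evaluate (error "boom" :: Int))
  case r of
    Left (ErrorCall msg) -> return msg
    Right _              -> return "no exception"
```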

John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: specify call-by-need

2011-02-15 Thread John Meacham
Except for the fact that compilers don't actually implement call by
need. An example would be the speculative evaluation of ghc.

http://research.microsoft.com/en-us/um/people/simonpj/papers/optimistic/adaptive_speculation.ps

And local optimizations that affect asymptotic behavior are used all
the time, to the point they are vital for a functioning compiler. The
tail-call optimization turning O(n) space usage to O(1) being a prime
example.

And what is meant by call-by-need in the presence of exceptions and
concurrency is not entirely obvious.

I think that specifying call-by-need would be more confusing and
contrary to what actually exists in the wild.

   John


On Tue, Feb 15, 2011 at 5:53 PM, Scott Turner 2hask...@pkturner.org wrote:
 In practice, Haskell a call-by-need language.  Still, software
 developers are not on firm ground when they run into trouble with
 evaluation order, because the language definition leaves this open. Is
 this an underspecification that should be fixed?

  1. Haskell programmers learn the pitfalls of sharing as soon
     as they cut their teeth on 'fib',
  2. Virtually all significant-sized Haskell programs rely on
     lazy evaluation and have never been tested with another
     evaluation strategy,
  3. Questions come up on Haskell-Café, infrequently but regularly,
     regarding whether a compiler optimization has altered sharing
     of values within a program, causing it to fail,
  4. The rationale for the monomorphism restriction assumes
     lazy evaluation,
  5. It is the effect on asymptotic behavior that matters,
  6. Portable Haskell code should not have to allow for the
     variety of non-strict evaluation strategies, as the Haskell
     Report currently implies.

 I suggest specifying call-by-need evaluation, allowing for the places
 where type classes prevent this.  If necessary, make it clear that local
 optimizations not affecting asymptotic behavior are permitted.

 This would not eliminate struggles with evaluation order. The intent
 would be to clarify expectations.

 -- Scott Turner

 ___
 Haskell-prime mailing list
 Haskell-prime@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-prime


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Injective type families?

2011-02-14 Thread John Meacham
Isn't this what data families (as opposed to type families) do?
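To illustrate the point (not from the original thread): each 'data instance' introduces a brand-new datatype, so a data family is injective by construction, and GHC may conclude a ~ b from DF a ~ DF b. A minimal sketch:

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Unlike a type family, which may map different arguments to the
-- same existing type, every data instance is a fresh datatype.
data family DF a

data instance DF Int  = DFInt Int   deriving (Eq, Show)
data instance DF Bool = DFBool Bool deriving (Eq, Show)

unwrapInt :: DF Int -> Int
unwrapInt (DFInt n) = n
```

The trade-off is that data-family results must be wrapped in their instance constructors, whereas type instances like `Z :+: m = m` compute to an existing type directly.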

John

On Mon, Feb 14, 2011 at 1:28 PM, Conal Elliott co...@conal.net wrote:
 Is there a way to declare a type family to be injective?

 I have

 data Z
 data S n

 type family n :+: m
 type instance Z   :+: m = m
 type instance S n :+: m = S (n :+: m)

 My intent is that (:+:) really is injective in each argument (holding the
 other as fixed), but I don't know how to persuade GHC, leading to some
 compilation errors like the following:

     Couldn't match expected type `m :+: n'
    against inferred type `m :+: n1'
   NB: `:+:' is a type function, and may not be injective

 I realize that someone could add more type instances for (:+:), breaking
 injectivity.

 Come to think of it, I don't know how GHC could even figure out that the two
 instances above do not overlap on the right-hand sides.

 Since this example is fairly common, I wonder: does anyone have a trick for
 avoiding the injectivity issue?

 Thanks,  - Conal

 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users





Re: [Haskell-cafe] ($) not as transparent as it seems

2011-02-03 Thread John Meacham
In general, errors are always interchangeable with one another. An
exception in Haskell is a value, rather than an event. Haskell
prescribes no evaluation order other than that if the result is defined, it
must be equivalent to the one generated by a normal-order reduction
strategy. Since error is not a valid value, any behavior, including
just locking up, is a completely acceptable (if not very friendly)
thing for a compiler to do.

In practice, we like writing compilers that help us find our errors
and using compilers that don't obfuscate them, so compilers tend to
behave more or less like you'd expect when presented with error, but
not at the expense of optimization or other necessary transformations.

GHC has stronger guarantees in order to support its imprecise
exceptions extension, in that the exceptional value returned is
guaranteed to be (non-deterministically) selected from the set of all
possible errors for every possible evaluation order of the expression.
So it won't just conjure up something new out of thin air, but neither
can you expect any particular exception when your code can produce
more than one.

John
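The imprecise-exceptions guarantee can be observed with Control.Exception (a
sketch; which message you catch may legitimately vary with optimization level
and evaluation order):

```haskell
import Control.Exception (evaluate, try, ErrorCall(..))

-- (error "a" + error "b") denotes bottom; GHC only promises that the
-- exception delivered is one that some evaluation order could raise,
-- so 'msg' may be "a" or "b" depending on how the code was compiled.
main :: IO ()
main = do
  r <- try (evaluate (error "a" + error "b" :: Int))
  case r of
    Left (ErrorCall msg) -> putStrLn ("caught: " ++ msg)
    Right n              -> print n
```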

On Thu, Feb 3, 2011 at 2:42 PM, Dan Doel dan.d...@gmail.com wrote:
 On Thursday 03 February 2011 5:12:54 PM Tim Chevalier wrote:
 On Thu, Feb 3, 2011 at 2:03 PM, Luke Palmer lrpal...@gmail.com wrote:
  This is probably a result of strictness analysis.  error is
  technically strict, so it is reasonable to optimize to:
 
     let e = error "foo" in e `seq` error e

 Yes, and you can see this in the Core code that Don posted: in version
 (A), GHC optimized away the outer call to error. But in version (B),
 the demand analyzer only knows that ($) is strict in its first
 argument -- it's not strict in its second. So it's not obviously safe
 to do the same optimization: the demand analyzer doesn't look
 through higher-order function arguments IIRC. (You can confirm this
 for yourself if you also want to read the demand analyzer output.)

 If ($) were getting inlined, the code would look the same coming into
 demand analysis in both cases, so you wouldn't see a difference. So
 I'm guessing you're compiling with -O0.

 Whatever is going on, it has to be active during ghci, because all these
 differences can be seen during interpretation (in 7.0.1, at least).

  Prelude error (error "foo")
  *** Exception: foo
  Prelude error $ error "foo"
  *** Exception: *** Exception: foo
  Prelude let g :: (a -> b) -> a -> b ; g f x = f x in g error (error "foo")
  *** Exception: foo
  Prelude let g :: (a -> b) -> a -> b ; g f x = f x
  Prelude g error (error "foo")
  *** Exception: *** Exception: foo
  Prelude let foo = error "foo" in error foo
  *** Exception: foo
  Prelude let foo = error "foo"
  Prelude error foo
  *** Exception: *** Exception: foo

 Actually compiling seems to remove the difference in 7.0.1, at least, because
 the output is always:

  Foo: foo

 regardless of ($) or not ('fix error' hangs without output as well, which
 isn't what I thought would happen).

 Anyhow, that rules out most general-purpose optimizations (including
 strictness analysis, I thought).

 - Dan

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




Re: OSX i386/x86 and x86_64 - time to switch supported platforms?

2011-02-01 Thread John Meacham
Even though the hardware is x86_64, I thought the vast majority of
Macs used a 32-bit build of OSX and ran 32-bit programs?

 John

On Tue, Feb 1, 2011 at 3:38 AM, Max Cantor mxcan...@gmail.com wrote:
 The last 32-bit Intel Mac was the Mac Mini, discontinued in August 2007. The
 bulk of them were discontinued in 2006, along with PowerPC Macs.  Does it
 make sense to relegate OSX x86_64 to community status while the 32-bit
 version is considered a supported platform?

 Given that I'm far from experienced enough to be able to contribute 
 meaningfully to GHC, I'm not complaining about anyone's efforts, just that 
 those efforts might be a bit misallocated.  I'd venture a guess that far more 
 people are interested in running 64-bit GHC on OSX than in running GHC on 
 what is now fairly antiquated hardware.

 mc


 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users




Re: Kernel panic when building HEAD on OS X 10.6.4

2011-01-31 Thread John Meacham
Any chance a cooling fan inside died and you are overheating it?

Can you reproduce the failure with other heavy-load programs? Can you
run a widget that monitors the internal temperatures and other sensors
during the build?

It does seem odd...

John

On Mon, Jan 31, 2011 at 12:46 AM, Johan Tibell johan.tib...@gmail.com wrote:
 On Mon, Jan 31, 2011 at 4:33 AM, Manuel M T Chakravarty
 mchakrava...@mac.com wrote:
 Are you building inside a Parallels VM?  If so, it is probably a Parallels 
 bug (which also explain why compiling GHC can lead to a kernel panic).

 If the GHC build is not in a Parallels VM, I would suggest to shut down 
 Parallels and try again.  If the problem does not occur without Parallels 
 running, it's probably a bug in some of Parallel's virtualisation 
 infrastructure.

 I'm not running in a VM. I shut down parallels and also unloaded its
 kernel extensions by running:

 sudo launchctl stop com.parallels.vm.prl_naptd
 sudo launchctl stop com.parallels.desktop.launchdaemon
 sudo kextunload -b com.parallels.kext.prl_hypervisor
 sudo kextunload -b com.parallels.kext.prl_hid_hook
 sudo kextunload -b com.parallels.kext.prl_usb_connect
 sudo kextunload -b com.parallels.kext.prl_netbridge
 sudo kextunload -b com.parallels.kext.prl_vnic

 It still panics. Here's the crash log without the parallels extensions:

 =

 Interval Since Last Panic Report:  109637 sec
 Panics Since Last Report:          3
 Anonymous UUID:                    32C0D5F6-F2B2-4DB3-A784-F4B33799DE63

 Mon Jan 31 09:42:12 2011
 panic(cpu 1 caller 0x2a8ab2): Kernel trap at 0x0046d801, type 14=page
 fault, registers:
 CR0: 0x8001003b, CR2: 0x0018, CR3: 0x0010, CR4: 0x0660
 EAX: 0x08014cb4, EBX: 0x08014cb4, ECX: 0x09af4304, EDX: 0x
 CR2: 0x0018, EBP: 0x56b9be68, ESI: 0x09af4304, EDI: 0x0728c7e0
 EFL: 0x00010297, EIP: 0x0046d801, CS:  0x0008, DS:  0x072d0010
 Error code: 0x

 Backtrace (CPU 1), Frame : Return Address (4 potential args on stack)
 0x56b9bc88 : 0x21b455 (0x5cf328 0x56b9bcbc 0x2238b1 0x0)
 0x56b9bcd8 : 0x2a8ab2 (0x591664 0x46d801 0xe 0x59182e)
 0x56b9bdb8 : 0x29e9a8 (0x56b9bdd0 0x90853d4 0x56b9be68 0x46d801)
 0x56b9bdc8 : 0x46d801 (0xe 0x48 0x56b90010 0x72d0010)
 0x56b9be68 : 0x49cf8c (0x8014cb4 0x9af4304 0x56b9bebc 0x0)
 0x56b9be88 : 0x46ec97 (0x9af4304 0x0 0x1da230 0x18)
 0x56b9bed8 : 0x4933de (0x72da9e0 0x12baec80 0x0 0x0)
 0x56b9bf78 : 0x4ed78d (0x9a157e0 0x87ae7c0 0x87ae804 0x0)
 0x56b9bfc8 : 0x29eef8 (0x8957060 0x0 0x10 0x8954fc0)

 BSD process name corresponding to current thread: ghc-stage2

 Mac OS version:
 10F569

 Kernel version:
 Darwin Kernel Version 10.4.0: Fri Apr 23 18:28:53 PDT 2010;
 root:xnu-1504.7.4~1/RELEASE_I386
 System model name: MacBookPro7,1 (Mac-F222BEC8)

 System uptime in nanoseconds: 429843412257
 unloaded kexts:
 com.parallels.kext.prl_usb_connect      6.0 11828.615184 (addr 0x5807e000,
 size 0x24576) - last unloaded 317902627354
 loaded kexts:
 com.pgp.kext.PGPnke     1.1.0 - last loaded 64160566115
 com.sophos.kext.sav     7.2.0
 com.pgp.iokit.PGPwdeDriver      1.2.0d1
 com.apple.driver.AppleHWSensor  1.9.3d0
 com.apple.filesystems.autofs    2.1.0
 com.apple.driver.AGPM   100.12.12
 com.apple.driver.AudioAUUC      1.4
 com.apple.Dont_Steal_Mac_OS_X   7.0.0
 com.apple.iokit.CHUDUtils       364
 com.apple.driver.AppleMikeyHIDDriver    1.2.0
 com.apple.iokit.CHUDProf        364
 com.apple.driver.AppleMikeyDriver       1.8.7f1
 com.apple.driver.AppleIntelPenrynProfile        17.1
 com.apple.driver.AppleHDA       1.8.7f1
 com.apple.driver.AppleUpstreamUserClient        3.3.2
 com.apple.driver.AudioIPCDriver 1.1.2
 com.apple.driver.AppleLPC       1.4.12
 com.apple.driver.AppleBacklight 170.0.24
 com.apple.driver.SMCMotionSensor        3.0.0d4
 com.apple.iokit.AppleBCM5701Ethernet    2.3.9b3
 com.apple.driver.ACPI_SMC_PlatformPlugin        4.1.2b1
 com.apple.GeForce       6.1.8
 com.apple.driver.AirPortBrcm43224       425.16.2
 com.apple.kext.AppleSMCLMU      1.5.0d3
 com.apple.driver.AppleUSBTCButtons      1.8.1b1
 com.apple.driver.AppleIRController      303.8
 com.apple.driver.AppleUSBTCKeyboard     1.8.1b1
 com.apple.driver.AppleUSBCardReader     2.5.4
 com.apple.iokit.SCSITaskUserClient      2.6.5
 com.apple.iokit.IOAHCIBlockStorage      1.6.2
 com.apple.driver.AppleUSBHub    4.0.0
 com.apple.driver.AppleUSBEHCI   4.0.2
 com.apple.BootCache     31
 com.apple.AppleFSCompression.AppleFSCompressionTypeZlib 1.0.0d1
 com.apple.driver.AppleUSBOHCI   3.9.6
 com.apple.driver.AppleFWOHCI    4.7.1
 com.apple.driver.AppleAHCIPort  2.1.2
 com.apple.driver.AppleEFINVRAM  1.3.0
 com.apple.driver.AppleRTC       1.3.1
 com.apple.driver.AppleHPET      1.5
 com.apple.driver.AppleSmartBatteryManager       160.0.0
 com.apple.driver.AppleACPIButtons       1.3.2
 com.apple.driver.AppleSMBIOS    1.6
 com.apple.driver.AppleACPIEC    1.3.2
 com.apple.driver.AppleAPIC      1.4
 com.apple.driver.AppleIntelCPUPowerManagementClient     105.10.0
 

Re: [Haskell-cafe] [Haskell] ANNOUNCE: jhc 0.7.7 is out.

2011-01-30 Thread John Meacham
On Sun, Jan 30, 2011 at 4:44 AM, Roman Cheplyaka r...@ro-che.info wrote:
 A few questions about the inclusion of parsec:

 1. It is parsec-2, not parsec-3, right?

Yes, it is parsec-2. 2.1.0.1 to be exact.

 2. Does this change consist of merely inclusion parsec as a standard
   library, or are there any compiler improvements in this release that
   made it possible to build parsec?

There have been a lot of bug fixes and improvements in this version, I can't
remember if any were particularly enabling for parsec or not. I wouldn't be
surprised if it didn't work well before this or the last release.

John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell] ANNOUNCE: jhc 0.7.7 is out.

2011-01-29 Thread John Meacham
Announcing jhc 0.7.7! This release fixes a large number of bugs that cropped up
when compiling Haskell out in the wild, as well as adding some more features, a
major one being that the garbage collector is now enabled by default.

 http://repetae.net/computer/jhc/

Changes: (including some changes from the unannounced 0.7.6 release)

* The Garbage Collector is now enabled by default.
* new standard libraries
    * transformers
    * parsec
    * QuickCheck
* report proper errors with line numbers for various issues with compiled code.
* New option '-C' that compiles to C code and stops, useful for targeting other
  platforms or building shared libraries.
* Nintendo Wii added as target (thanks to Korcan Hussein)
* Fix major performance bug that kept WRAPPERs from being inlined in
certain places.
* Typechecking speed greatly increased.
* monomorphism-restriction flag is now respected
* empty class contexts now work
* unicode in haskell source supported now
* Type Defaulting now works properly
* RULES parse like ghc now for compatibility
* 'do' 'where' on same indent now parses
* Build system fixes and cleanups
* irrefutable lambda pattern bindings desugar properly now.
* GHC parsing regression tests have been ported to jhc, helped find
and fix many bugs.
* Certain optimizations would discard RULES, these have been fixed.
* Removed quadratic behavior in optimizer, speeds things up noticeably.
* Garbage collector improvements, caches are pre-initialized.
* Fix shiftL/R implementations for built in types.
* All Num, Real, and Integral magic removed from compiler. This is a very
  good thing.
* improved help messages

       John

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] Splittable random numbers

2011-01-21 Thread John Meacham
On Wed, Nov 10, 2010 at 11:33 AM, Lauri Alanko l...@iki.fi wrote:
 So a naive implementation of split would be:

 split g = (mkGen seed, g')
  where (seed, g') = random g

 (Where mkGen creates a new state from some sufficiently big seed
 data.)

 So what is the problem here? What kinds of observable
 interdependencies between split streams would come up with the above
 definition using common PRNGs? Are my assumptions about the security
 of cryptographic PRNGs incorrect, or is the issue simply that they are
 too expensive for ordinary random number generation?

Yeah, I was thinking that for any good PRNG this should be fine. We
probably want to pull as much internal state as we can from one
generator to the other, so we may want to use a specialized seeding
routine that is optimized for a specific PRNG rather than using an Int
or something.

John
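The naive split quoted above, made concrete against System.Random's StdGen (a
sketch of the idea only; as noted, a real implementation would want a seeding
routine that transfers more internal state than a single Int):

```haskell
import System.Random (StdGen, mkStdGen, random)

-- Seed a fresh generator from the output of the old one. The two
-- generators then share no state, but with a weak PRNG their output
-- streams may still be observably correlated, which is the concern
-- raised in this thread.
naiveSplit :: StdGen -> (StdGen, StdGen)
naiveSplit g = (mkStdGen seed, g')
  where (seed, g') = random g
```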

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: backward compatibility

2011-01-19 Thread John Meacham
On Wed, Jan 19, 2011 at 6:32 PM, Kazu Yamamoto k...@iij.ad.jp wrote:
 Hello,

 I have been using GHC HEAD for some months and am suffering from the
 breaks of backward compatibility.

 1) MANY packages cannot be compiled with GHC HEAD because of the lack of
 FlexibleInstances and BangPatterns.

 2) The network package on github cannot be compiled because the layout
 handling of GHC HEAD became stricter. For instance, here is such
 code from Network/Socket.hsc.

     allocaBytes (2 * sizeOf (1 :: CInt)) $ \ fdArr -> do
     _ <- throwSocketErrorIfMinus1Retry "socketpair" $
                 c_socketpair (packFamily family)
                              (packSocketType stype)
                              protocol fdArr

Allowing this was a specific feature that was included in ghc on
purpose (as well as the relaxed if/then layout rule in do statements),
so this is definitely a regression.

 John
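For reference, the other relaxation mentioned (if/then/else aligned with the
statements of a do block, GHC's relaxed layout behaviour) looks like this; a
strict reading of the Haskell 98 layout rule would reject it:

```haskell
-- 'then'/'else' at the same column as 'if' inside a do block is
-- accepted by GHC's relaxed layout rule.
f :: Bool -> IO ()
f b = do
  if b
  then putStrLn "yes"
  else putStrLn "no"

main :: IO ()
main = f True >> f False
```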

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Haskell for Gingerbread

2011-01-13 Thread John Meacham
On Thu, Jan 13, 2011 at 3:07 AM, Stefan Kersten s...@k-hornz.de wrote:
 On 28.12.10 21:25, John Meacham wrote:
 jhc generated C works on the android/ARM just fine. Android specific
 libraries aren't available, so you would have to bind to what you want
 with the FFI.

 is there a recommended procedure for porting code from GHC to JHC? i'd like to
 port an application of mine to ios/android, but my naive approach of 
 specifying
 the jhc compiler in ~/.cabal/config fails with the first dependency (binary),
 because the base library does not have the required version. any hints?

In general, cabal and jhc don't work together. However, many libraries
from hackage do just work when compiled with jhc manually, so it
isn't as big an issue in practice as one might think, but the
porting isn't automated. If your code doesn't have any ghc-specific
code in it, then just calling jhc directly on Main and specifying the
packages you depend on with '-p' should do the right thing. If there
are specific libraries you need that aren't included in the jhc
distribution and should be, then ask on the jhc list and I'll see if I
can get them in the next release. A more automated way to do this
will probably appear at some point.
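A sketch of the manual invocation described here (the package names are
illustrative, not from the thread):

```shell
# Compile Main directly with jhc, naming needed packages with -p.
jhc -p containers -p filepath Main.hs -o myapp

# Or stop at portable C (jhc's -C option) for cross-compiling to ARM.
jhc -C -o myapp.c Main.hs
```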

There were some issues found recently by the iPhone target that
probably apply to android too, see the following thread for how to
resolve them.

http://www.haskell.org/pipermail/jhc/2011-January/000858.html

 John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell for Gingerbread

2010-12-28 Thread John Meacham
jhc generated C works on the android/ARM just fine. Android specific
libraries aren't available, so you would have to bind to what you want
with the FFI.

John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Identity type

2010-12-22 Thread John Meacham
On Tue, Dec 14, 2010 at 10:31 AM, Pierre-Etienne Meunier
pierreetienne.meun...@gmail.com wrote:
 Is there something like an identity type, transparent to the type-checker, in 
 haskell ?
 For instance, I'm defining an interval arithmetic, with polynomials, 
 matrices, and all that... defined with intervals. The types are :

No, such a thing doesn't exist. In fact, it would make the type system
undecidable if it did. I only know this because a long while ago
I really wanted such a thing to exist, then tried to work out the
consequences and realized it would break the type system. I have found
that liberal use of 'newtype-deriving' has mitigated my need for it in the
specific cases I was interested in.

John
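The newtype-deriving workaround John mentions, sketched for a units-style
wrapper (the types and instance lists below are made up for illustration): the
newtype stays distinct to the type checker, but GeneralizedNewtypeDeriving
lets it reuse the representation type's instances at no runtime cost.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- Distinct types for the checker, identical representations at runtime.
newtype Interval = Interval (Double, Double)
  deriving (Eq, Show)

newtype Radius = Radius Double
  deriving (Eq, Ord, Num, Fractional, Show)
```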

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Data.Typeable TypeRep Ord instance.

2010-12-22 Thread John Meacham
On Sat, Dec 4, 2010 at 2:08 PM, Serguey Zefirov sergu...@gmail.com wrote:
 Why TypeRep does have equality and doesn't have ordering?

 It would be good to have that.

Yes, I have wanted that too. It would make maps from types to values
possible/efficient. There is a very critical path in jhc that uses
type-indexed data structures, for which I have to implement a very hacky
workaround due to the missing Ord instance for TypeRep.
John
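What an Ord instance enables, sketched with Data.Map (the names below are
invented for illustration; note that later versions of base did eventually add
Ord TypeRep):

```haskell
import Data.Typeable (TypeRep, Typeable, typeRep)
import qualified Data.Map as Map

-- A map from types to values: efficient once TypeRep has an Ord
-- instance, since Data.Map requires ordered keys.
newtype TypeMap v = TypeMap (Map.Map TypeRep v)

insertFor :: Typeable a => proxy a -> v -> TypeMap v -> TypeMap v
insertFor p v (TypeMap m) = TypeMap (Map.insert (typeRep p) v m)

lookupFor :: Typeable a => proxy a -> TypeMap v -> Maybe v
lookupFor p (TypeMap m) = Map.lookup (typeRep p) m
```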

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Type families status

2010-12-13 Thread John Meacham
FWIW, I am forgoing functional dependencies and going straight to type
families/associated types in jhc. They are easier to implement and
much cleaner IMHO.

John

On Mon, Dec 13, 2010 at 12:29 AM, Simon Peyton-Jones
simo...@microsoft.com wrote:
 Yes, I think type families are here to stay.

 There is no formal policy about GHC extensions.  Generally speaking, I regard 
 GHC as a laboratory in which to test ideas, which militates in favour of 
 putting things in so that people can try them.  Once in they are hard to take 
 out again (linear implicit parameters is a rare exception) because some come 
 to rely on them.

 If there's anything in particular you need, ask.  The main thing that is 
 scheduled for an overhaul is the derivable type class mechanism, for which 
 Pedro is working on a replacement.

 Simon

 | -Original Message-
 | From: glasgow-haskell-users-boun...@haskell.org [mailto:glasgow-haskell-
 | users-boun...@haskell.org] On Behalf Of Permjacov Evgeniy
 | Sent: 10 December 2010 19:42
 | To: glasgow-haskell-users@haskell.org
 | Subject: Type families status
 |
 | Is it safe to consider the type families and associated type families
 | extensions for ghc as stable? Which related extensions (flexible
 | contexts, undecidable instances, and so on) may be deprecated or changed
 | in the near (2-3 years) future, and which may not?
 |
 | ___
 | Glasgow-haskell-users mailing list
 | Glasgow-haskell-users@haskell.org
 | http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

