ghc-7.8.1 RC2 ppc dyn linked executable segfaults

2014-03-18 Thread Jens Petersen

 http://www.haskell.org/ghc/dist/7.8.1-rc2/



 A test build on ppc completed but then the dyn executable sanity check
 failed (not yet sure why):
 http://ppc.koji.fedoraproject.org/koji/taskinfo?taskID=1707922


The ppc64 build looks okay, but ppc seems to have a problem with
executables dynamically linked to Haskell libs segfaulting.

Is anyone able to reproduce this on Linux/ppc (32bit)?  Debian?

I don't have access yet to a ppc box to investigate further, but I see the
segfault for 'ghc -dynamic Hello.hs; ./Hello' on ppc in the Fedora buildsys.
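
For anyone trying to reproduce: the test program is trivial; presumably any
dynamically linked executable shows it. A minimal Hello.hs might be:

    -- Hello.hs: the program itself is irrelevant; the crash presumably
    -- comes from the -dynamic link against the shared Haskell libraries
    main :: IO ()
    main = putStrLn "Hello"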

http://ppc.koji.fedoraproject.org/kojifiles/work/tasks/1459/1711459/build.log?offset=-4000
(ghc -v -dynamic Foo.hs) [1]

I guess I should file a bug anyway.  I sent a heads-up mail to Gustavo too.

Jens

[1] full log is
http://ppc.koji.fedoraproject.org/kojifiles/work/tasks/1459/1711459/build.log [4.9MB]


HEADS-UP: new server-side validation git hook for submodule updates & call-for-help

2014-03-18 Thread Herbert Valerio Riedel
Hello *,

I put a new server-side validation hook in place a few days ago, and
since nobody seems to have complained yet, I assume it hasn't had any
adverse effects so far :-)

It will only be triggered when Git submodule references are touched by a
commit; you can find some preliminary (but incomplete) documentation and
a sample session triggering validation-failure on purpose at

  https://ghc.haskell.org/trac/ghc/ticket/8251#comment:4

(this will be turned into a proper wiki-page once #8251 is completed;
there are some minor details wrt some corner cases that still need to be
looked at)

So, this mostly addresses the server-side requirements for migrating to
a proper git-submodule set-up for ghc.git.

The next steps, however, include taking care of the client-side work-flow
for working with a fully submoduled ghc.git setup. Personally, I'm
quite comfortable using direct git commands to manage such a construct,
but I'm well aware not everyone is (as previous discussions here have
shown). Also, as my time is rather limited, I'd like to ask interested
parties to join in and help formulate the future client-side work-flow[1]
and/or update (or rewrite) the 'sync-all' script to provide a seamless or at
least smooth transition for those GHC devs who want to keep using
sync-all instead of using direct Git commands.


 [1]: There's some difference in how tracked upstream packages and
  GHC-HQ owned sub-repos are to be handled workflow-wise, to avoid
  ending up with a noisy ghc.git history. 

  For instance, having ghc.git with submodules is not the same as
  having a huge monolithic ghc.git repository with all subrepos
  embedded. Specifically, it might not be sensible to propagate
  *every* single subrepo-commit as a separate ghc.git submod-ref
  update, but rather in logical batches (N.B.: using submodules
  gives the additional ability to git bisect within subrepos instead
  of having to bisect always only at top-level). This is one example
  of things to discuss/consider when designing the new work-flow.

Cheers,
  hvr


Re: FFI: c/c++ struct on stack as an argument or return value

2014-03-18 Thread Yuras Shumovich
On Tue, 2014-03-18 at 12:37 +1100, Manuel M T Chakravarty wrote:
  
  A library implementation can't generate a native dynamic wrapper; it has
  to use slow libffi.
 
 When we first implemented the FFI, there was no libffi. Maintaining the 
 adjustor code for all platforms is a PITA; hence, using libffi was a welcome 
 way to improve portability.

Do you think we can remove native adjustors? I can prepare a patch.

It requires only minor changes to cache the ffi_cif structure. During
desugaring, for each wrapper we can generate a fresh global variable to
store the cif pointer and pass it to createAdjustor.
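
Roughly, the generated code per wrapper might look like this (a sketch
only; CIF, buildCif, retFfiType, argFfiTypes and createAdjustorCif are
hypothetical names standing in for compiler-generated internals):

    import Foreign.C.Types   (CInt)
    import Foreign.Ptr       (FunPtr, Ptr)
    import Foreign.StablePtr (newStablePtr)
    import System.IO.Unsafe  (unsafePerformIO)

    {-# NOINLINE wrapperCif #-}
    wrapperCif :: Ptr CIF           -- one global cif per wrapper type,
    wrapperCif = unsafePerformIO $  -- built once and reused ever after
      buildCif retFfiType argFfiTypes

    mkWrapper :: (CInt -> IO CInt) -> IO (FunPtr (CInt -> IO CInt))
    mkWrapper f = do
      sp <- newStablePtr f
      createAdjustorCif wrapperCif sp  -- no per-call cif construction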

 
  
  From my point of view, at this point it is more important to agree on
  the next question: do we want such functionality in ghc at all? I don't
  want to waste time on it if nobody wants to see it merged.
 
 I still don’t see the benefit in further complicating an already murky corner 
 of the compiler. Moreover, for this to make sense, it would need to work on 
 all supported platforms. Unless you are volunteering to implement it on 
 multiple platforms, this would mean, we’d use libffi for most platforms 
 anyway. This brings me to my original point, a library or tool is the better 
 place for this.

OK, I don't buy it, but I see your point.

 
 Manuel
 
 PS: I’d happily accept language-c-inline patches for marshalling structs.
 




Re: FFI: c/c++ struct on stack as an argument or return value

2014-03-18 Thread Simon Marlow

On 18/03/2014 01:37, Manuel M T Chakravarty wrote:

> Yuras Shumovich shumovi...@gmail.com:
>
>> I think the compiler is the right place. It is impossible to have
>> an efficient implementation in a library.
>>
>> For dynamic wrapper (foreign import wrapper stuff) ghc generates a piece
>> of executable code at runtime. There are native implementations for a
>> number of platforms, and libffi is used as a fallback for other
>> platforms (see rts/Adjustor.c). AFAIK it is done that way because libffi
>> is slower than the native implementation.
>>
>> A library implementation can't generate a native dynamic wrapper; it has
>> to use slow libffi.


> When we first implemented the FFI, there was no libffi. Maintaining the
> adjustor code for all platforms is a PITA; hence, using libffi was a welcome
> way to improve portability.
>
> Making the adjustor code more complicated by adding more functionality doesn’t
> sound like a good plan to me.
>
> Besides, there are other overheads in addition to the actual marshalling in FFI
> calls, and most of the time we are calling out to library functions for which
> the FFI call overhead is only a small portion of the runtime.



>>> i think the crux of Manuel's point is mainly that any good proposal has to
>>> at least give a roadmap to support on all the various platforms etc etc
>>
>> I don't think you are expecting a detailed schedule from me. Passing
>> structures by value is possible on all platforms ghc supports, and it can
>> be implemented for any particular platform if somebody is interested.
>>
>> From my point of view, at this point it is more important to agree on
>> the next question: do we want such functionality in ghc at all? I don't
>> want to waste time on it if nobody wants to see it merged.


I'm really keen to have support for returning structs in particular. 
Passing structs less so, because working around the lack of struct 
passing isn't nearly as onerous as working around the lack of struct 
returns.  Returning multiple values from a C function is a real pain 
without struct returns: you have to either allocate some memory in 
Haskell or in C, and both methods are needlessly complex and slow. 
(though allocating in Haskell is usually better.) C++ code does this all 
the time, so if you're wrapping C++ code for calling from Haskell, the 
lack of multiple returns bites a lot.
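
For illustration, the allocate-in-Haskell workaround looks something like
this (a sketch; c_divmod is a hypothetical C function using out-parameters):

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Foreign.C.Types       (CInt)
    import Foreign.Marshal.Alloc (alloca)
    import Foreign.Ptr           (Ptr)
    import Foreign.Storable      (peek)

    -- C side (hypothetical): void divmod(int a, int b, int *q, int *r);
    foreign import ccall "divmod"
      c_divmod :: CInt -> CInt -> Ptr CInt -> Ptr CInt -> IO ()

    -- Allocate two out-parameters, call, then peek the results back.
    divMod' :: CInt -> CInt -> IO (CInt, CInt)
    divMod' a b =
      alloca $ \qp ->
        alloca $ \rp -> do
          c_divmod a b qp rp
          q <- peek qp
          r <- peek rp
          return (q, r)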


In fact implementing this is on my todo list, I'm really glad to see 
someone else is planning to do it :-)


The vague plan I had in my head was to allow the return value of a 
foreign import to be a tuple containing marshallable types, which would 
map to the appropriate return convention for a struct on the current 
platform.  Perhaps allowing it to be an arbitrary single-constructor 
type is better, because it allows us to use a type that has a Storable 
instance.
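
Under that plan, the shim above might shrink to a single declaration
(hypothetical syntax, not accepted by any existing GHC):

    -- foreign import ccall "divmod"
    --   c_divmod :: CInt -> CInt -> IO (CInt, CInt)
    -- with the tuple lowered to the platform's struct-return convention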


Cheers,
Simon



> I still don’t see the benefit in further complicating an already murky
> corner of the compiler. Moreover, for this to make sense, it would need
> to work on all supported platforms. Unless you are volunteering to
> implement it on multiple platforms, this would mean we’d use libffi for
> most platforms anyway. This brings me to my original point: a library or
> tool is the better place for this.
>
> Manuel
>
> PS: I’d happily accept language-c-inline patches for marshalling structs.
>
>> On Sat, 2014-03-15 at 00:37 -0400, Carter Schonwald wrote:
>>
>>> I'm not opposing that, in fact, there's a GHC ticket discussing some
>>> stuff related to this (related to complex numbers).
>>>
>>> i think the crux of Manuel's point is mainly that any good proposal
>>> has to at least give a roadmap to support on all the various platforms
>>> etc etc
>>>
>>> On Sat, Mar 15, 2014 at 12:33 AM, Edward Kmett ekm...@gmail.com wrote:
>>>
>>>> I don't care enough to fight and try to win the battle, but I just
>>>> want to point out that Storable structs are far more brittle and
>>>> platform dependent than borrowing the already correct platform logic
>>>> for struct passing from libffi.
>>>>
>>>> I do think the existing FFI extension made the right call under the
>>>> 32-bit ABIs that were in use at the time it was defined. That said,
>>>> with 64-bit ABIs saying that 2 32-bit ints should be passed in a
>>>> single 64-bit register, you wind up with large chunks of third party
>>>> APIs we just can't call out to directly any more, requiring many
>>>> one-off manual C shims.
>>>>
>>>> -Edward




>>>> On Sat, Mar 15, 2014 at 12:17 AM, Carter Schonwald
>>>> carter.schonw...@gmail.com wrote:
>>>>
>>>>> indeed, it's very very easy to do Storable instances that correspond
>>>>> to the struct type you want,
>>>>>
>>>>> the ``with`` function in Foreign.Marshal.Utils
>>>>> (http://hackage.haskell.org/package/base-4.6.0.1/docs/Foreign-Marshal-Utils.html)
>>>>> actually gets you most of the way there!
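
(As a minimal sketch of the Storable-plus-with approach described above,
for a hypothetical C type struct point { int x; int y; }:)

    import Control.Applicative   ((<$>), (<*>))
    import Foreign.C.Types       (CInt)
    import Foreign.Marshal.Utils (with)
    import Foreign.Ptr           (Ptr)
    import Foreign.Storable

    -- Mirrors the hypothetical C type: struct point { int x; int y; };
    data Point = Point CInt CInt

    instance Storable Point where
      sizeOf    _ = 8   -- two 4-byte ints, on the platforms assumed here
      alignment _ = 4
      peek p             = Point <$> peekByteOff p 0 <*> peekByteOff p 4
      poke p (Point x y) = pokeByteOff p 0 x >> pokeByteOff p 4 y

    -- 'with' marshals the value to a temporary Ptr Point for a call that
    -- takes a pointer; note this is still pass-by-pointer, not the
    -- pass-by-value convention this thread is about.
    usePoint :: (Ptr Point -> IO a) -> IO a
    usePoint k = with (Point 1 2) k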




>>>>> On Sat, Mar 15, 2014 at 12:00 AM, Manuel M T Chakravarty
>>>>> c...@cse.unsw.edu.au wrote:
>>>>>
>>>>>> Yuras,
>>>>>>
>>>>>> I’m not convinced that the compiler is the right place for this kind
>>>>>> of functionality. In fact, when we designed the Haskell FFI, we
>>>>>> explicitly decided against what you propose. There are a few reasons
>>>>>> for this.
>>>>>>
>>>>>> Firstly, compilers are complex beasts, and secondly, it takes a 
RE: HEADS-UP: new server-side validation git hook for submodule updates & call-for-help

2014-03-18 Thread Simon Peyton Jones
Herbert 

I really appreciate the work you are doing here -- thank you.

As a client, though, I'm very ignorant about submodules, so I do need education 
about the work-flows that I should follow.  If there are things I must or must 
not do, I need telling about them.

Much is taken care of by sync-all, which is great.  If that continues to be the 
case, I'm happy!

Simon

| -Original Message-
| From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of
| Herbert Valerio Riedel
| Sent: 18 March 2014 10:59
| To: ghc-devs
| Subject: HEADS-UP: new server-side validation git hook for submodule
| updates & call-for-help
| 
| Hello *,
| 
| I put a new server-side validation hook in place a few days ago, and
| since nobody seems to have complained yet, I assume it hasn't had any
| adverse effects so far :-)
| 
| It will only be triggered when Git submodule references are touched by a
| commit; you can find some preliminary (but incomplete) documentation and
| a sample session triggering validation-failure on purpose at
| 
|   https://ghc.haskell.org/trac/ghc/ticket/8251#comment:4
| 
| (this will be turned into a proper wiki-page once #8251 is completed;
| there are some minor details wrt some corner cases that still need to be
| looked at)
| 
| So, this mostly addresses the server-side requirements for migrating to
| a proper git-submodule set-up for ghc.git.
| 
| The next steps, however, include taking care of the client-side work-
| flow for working with a fully submoduled ghc.git setup. Personally,
| I'm quite comfortable using direct git commands to manage such a
| construct, but I'm well aware not everyone is (as previous discussions
| here have shown). Also, as my time is rather limited, I'd like to ask
| interested parties to join in and help formulate the future client-side
| work-flow[1] and/or update (or rewrite) the 'sync-all' script to provide a
| seamless or at least smooth transition for those GHC devs who want to
| keep using sync-all instead of using direct Git commands.
| 
| 
|  [1]: There's some difference in how tracked upstream packages and
|   GHC-HQ owned sub-repos are to be handled workflow-wise, to avoid
|   ending up with a noisy ghc.git history.
| 
|   For instance, having ghc.git with submodules is not the same as
|   having a huge monolithic ghc.git repository with all subrepos
|   embedded. Specifically, it might not be sensible to propagate
|   *every* single subrepo-commit as a separate ghc.git submod-ref
|   update, but rather in logical batches (N.B.: using submodules
|   gives the additional ability to git bisect within subrepos instead
|   of having to bisect always only at top-level). This is one example
|   of things to discuss/consider when designing the new work-flow.
| 
| Cheers,
|   hvr


Re: FFI: c/c++ struct on stack as an argument or return value

2014-03-18 Thread Yuras Shumovich
Hi,

I thought I had lost the battle :)
Thank you for the support, Simon!

I'm interested in a full-featured solution: arguments, return values,
foreign import, foreign export, etc. But it is too much for me to do it
all at once. So I started with the dynamic wrapper.

The plan is to support structs as arguments and return values for the
dynamic wrapper using libffi;
then implement native adjustors, at least for x86_64 linux;
then make the final design decisions (tuple or data? language pragma?
union support? etc);
and only then start working on foreign import.

But I'm open to suggestions. Just let me know if you think it is better
to start with return-value support for foreign import.

Thanks,
Yuras

On Tue, 2014-03-18 at 12:19 +0000, Simon Marlow wrote:
 I'm really keen to have support for returning structs in particular. 
 Passing structs less so, because working around the lack of struct 
 passing isn't nearly as onerous as working around the lack of struct 
 returns.  Returning multiple values from a C function is a real pain 
 without struct returns: you have to either allocate some memory in 
 Haskell or in C, and both methods are needlessly complex and slow. 
 (though allocating in Haskell is usually better.) C++ code does this all 
 the time, so if you're wrapping C++ code for calling from Haskell, the 
 lack of multiple returns bites a lot.
 
 In fact implementing this is on my todo list, I'm really glad to see 
 someone else is planning to do it :-)
 
 The vague plan I had in my head was to allow the return value of a 
 foreign import to be a tuple containing marshallable types, which would 
 map to the appropriate return convention for a struct on the current 
 platform.  Perhaps allowing it to be an arbitrary single-constructor 
 type is better, because it allows us to use a type that has a Storable 
 instance.
 
 Cheers,
 Simon
 




Pretty printing

2014-03-18 Thread Simon Peyton Jones
Gergo
I'm a bit out of date... did you update those test results etc. to get to a
clean validate?
S


Re: HEADS-UP: new server-side validation git hook for submodule updates & call-for-help

2014-03-18 Thread Johan Tibell
Let's give some example workflows for working with submodules. Here's what I
think a raw (i.e. no sync-all) update to base will look like. Please
correct me if I'm wrong.

# Step 1:
cd ~/src/ghc/libraries/base
# edit some_file
git add some_file
git commit -m "Commit to base repo"
git push  # push update to base to git.haskell.org

# Step 2
cd ~/src/ghc
git add libraries/base
git commit -m "Have GHC use the new base version"
git push  # push update to ghc to git.haskell.org

Failure modes include:

 * Forgetting step 2: the ghc repo will point to a slightly older base next
time someone checks it out. Fixing things when in this state: just perform
step 2.
 * Forgetting `git push` in step 1: the ghc repo will point to a base
commit that doesn't exist (except on some developer's machine).  Fixing
things when in this state: the developer who forgot to `git push` in step 1
needs to do that.

How could sync-all help us:

 * sync-all push could push all repos, preventing failure case 2 above.

The second interesting workflow involves pulling new changes. This is what
the raw (i.e. no sync-all) workflow will look like:

cd ~/src/ghc
git pull
git submodule update

Failure modes include:

 * Forgetting the `submodule update` and then doing e.g. `git commit -am
"some compile commit"`, reverting the pointer to e.g. base to whatever
older version the developer was using. No commits are lost (nothing changes
in the base repo), but the ghc repo will point to an older commit.

How could sync-all help us:

 * sync-all pull could always run `submodule update`.

The server-side check that Herbert added will make sure that this failure
mode cannot happen, as you explicitly have to say in the commit message
that you updated a submodule.

I think if base was folded into ghc.git very few people would have to deal
with submodules.

On Tue, Mar 18, 2014 at 11:58 AM, Herbert Valerio Riedel h...@gnu.org wrote:

 Hello *,

 I put a new server-side validation hook in place a few days ago, and
 since nobody seems to have complained yet, I assume it hasn't had any
 adverse effects so far :-)

 It will only be triggered when Git submodule references are touched by a
 commit; you can find some preliminary (but incomplete) documentation and
 a sample session triggering validation-failure on purpose at

   https://ghc.haskell.org/trac/ghc/ticket/8251#comment:4

 (this will be turned into a proper wiki-page once #8251 is completed;
 there are some minor details wrt some corner cases that still need to be
 looked at)

 So, this mostly addresses the server-side requirements for migrating to
 a proper git-submodule set-up for ghc.git.

 The next steps, however, include taking care of the client-side work-flow
 for working with a fully submoduled ghc.git setup. Personally, I'm
 quite comfortable using direct git commands to manage such a construct,
 but I'm well aware not everyone is (as previous discussions here have
 shown). Also, as my time is rather limited, I'd like to ask interested
 parties to join in and help formulate the future client-side work-flow[1]
 and/or update (or rewrite) the 'sync-all' script to provide a seamless or at
 least smooth transition for those GHC devs who want to keep using
 sync-all instead of using direct Git commands.


  [1]: There's some difference in how tracked upstream packages and
   GHC-HQ owned sub-repos are to be handled workflow-wise, to avoid
   ending up with a noisy ghc.git history.

   For instance, having ghc.git with submodules is not the same as
   having a huge monolithic ghc.git repository with all subrepos
   embedded. Specifically, it might not be sensible to propagate
   *every* single subrepo-commit as a separate ghc.git submod-ref
   update, but rather in logical batches (N.B.: using submodules
   gives the additional ability to git bisect within subrepos instead
   of having to bisect always only at top-level). This is one example
   of things to discuss/consider when designing the new work-flow.

 Cheers,
   hvr


Haddock strings in .hi files

2014-03-18 Thread Mateusz Kowalczyk
Hi all,

I saw https://ghc.haskell.org/trac/ghc/ticket/5467 pop up in my inbox
and it reminded me of something I've been wondering for a while: why do
we not store Haddock docstrings in the interface file?

I think that if we did, we could do some great things:

1. Show docs in GHCi (I vaguely recall someone working on this ~1 year
ago; does anyone have any info?)

2. Allow Haddock to work a lot faster: the large majority of the time spent
when creating documentation is actually spent by Haddock calling various
GHC functions, such as type-checking the modules. Only a small amount of
time is actually spent by Haddock on other tasks such as parsing or
outputting the documentation. If we could simply get everything we need
from the .hi files, we would save ourselves a lot of time.

3. Allow Haddock to create partial documentation: a complaint I
sometimes hear is that if anything at all in the project doesn't type-check,
we don't get any documentation at all. I think that it'd be viable to
generate only the documentation for the modules/functions that do
type-check and perhaps skip type signatures for everything else.

Points 1. and 2. are of clear benefit. Point 3. is a simple afterthought,
and thinking about it some more, I think that maybe it'd be possible to
do this with what we have right now: is type-checking separate parts of
the module supported? Can we retrieve documentation for the parts that
don't type-check?

I am asking for input on what people think. I am not familiar at all
with what goes into the .hi file (and I can't find anything concrete! Am
I missing some wiki page?) or why. At the very least, 1. should
be easy to implement.

It was suggested that I submit a proposal for this as part of GSoC,
namely implementing 1. and 2. I admit that having much faster
documentation builds would be amazing, and Edward K. and Carter S. seem
to think that this is very do-able in the 3-month period that GSoC runs
over.

While I say all this, I have already submitted my proposal on a
different topic. I am considering writing this up and submitting it as
well, but I am looking for some insight into the problem first.

If there are any students around still looking for ideas, please do
speak up if you want to snatch this. If there are people that are eager
to mentor something like this then I suppose they should speak up too.

Thanks!

-- 
Mateusz K.


Re: FFI: c/c++ struct on stack as an argument or return value

2014-03-18 Thread Simon Marlow

So the hard parts are:

 - the native code generators
 - native adjustor support (rts/Adjustor.c)

Everything else is relatively straightforward: we use libffi for 
adjustors on some platforms and for GHCi, and the LLVM backend should be 
quite easy too.


I would at least take a look at the hard bits and see whether you think
it's going to be possible to extend these to handle struct args/returns.
Because if not, then the idea is a dead end.  Or maybe we will need to
limit the scope to make things easier (e.g. only integer and pointer
fields).


Cheers,
Simon

On 18/03/2014 17:31, Yuras Shumovich wrote:

> Hi,
>
> I thought I had lost the battle :)
> Thank you for the support, Simon!
>
> I'm interested in a full-featured solution: arguments, return values,
> foreign import, foreign export, etc. But it is too much for me to do it
> all at once. So I started with the dynamic wrapper.
>
> The plan is to support structs as arguments and return values for the
> dynamic wrapper using libffi;
> then implement native adjustors, at least for x86_64 linux;
> then make the final design decisions (tuple or data? language pragma?
> union support? etc);
> and only then start working on foreign import.
>
> But I'm open to suggestions. Just let me know if you think it is better
> to start with return-value support for foreign import.
>
> Thanks,
> Yuras

> On Tue, 2014-03-18 at 12:19 +0000, Simon Marlow wrote:

>> I'm really keen to have support for returning structs in particular.
>> Passing structs less so, because working around the lack of struct
>> passing isn't nearly as onerous as working around the lack of struct
>> returns.  Returning multiple values from a C function is a real pain
>> without struct returns: you have to either allocate some memory in
>> Haskell or in C, and both methods are needlessly complex and slow.
>> (though allocating in Haskell is usually better.) C++ code does this all
>> the time, so if you're wrapping C++ code for calling from Haskell, the
>> lack of multiple returns bites a lot.
>>
>> In fact implementing this is on my todo list, I'm really glad to see
>> someone else is planning to do it :-)
>>
>> The vague plan I had in my head was to allow the return value of a
>> foreign import to be a tuple containing marshallable types, which would
>> map to the appropriate return convention for a struct on the current
>> platform.  Perhaps allowing it to be an arbitrary single-constructor
>> type is better, because it allows us to use a type that has a Storable
>> instance.
>>
>> Cheers,
>> Simon







Re: HEADS-UP: new server-side validation git hook for submodule updates & call-for-help

2014-03-18 Thread Herbert Valerio Riedel
Hello Johan, 

On 2014-03-18 at 19:17:55 +0100, Johan Tibell wrote:
 Let's give some example workflows for working with submodules. Here's what I
 think a raw (i.e. no sync-all) update to base will look like. Please
 correct me if I'm wrong.

 # Step 1:
 cd ~/src/ghc/libraries/base
 # edit some_file
 git add some_file
 git commit -m "Commit to base repo"
 git push  # push update to base to git.haskell.org

'git push' w/o a refspec will only work if HEAD isn't detached

you'd rather have to invoke something like 'git push origin
HEAD:ghc-head'[1] (or have a tracked branch checked out)

 # Step 2
 cd ~/src/ghc
 git add libraries/base
 git commit -m "Have GHC use the new base version"
 git push  # push update to ghc to git.haskell.org

 Failure modes include:

  * Forgetting step 2: the ghc repo will point to a slightly older base next
 time someone checks it out. Fixing things when in this state: just perform
 step 2.

that brings up an interesting question (one that was also mentioned on
#ghc already):

Are there cases when it is desirable to point to an older commit on
purpose? 

(one use-case may be, if you want to rollback ghc.git to some older
commit to unbreak the build w/o touching the submodule repo itself)

(somewhat related feature: git submodule update --remote)

  * Forgetting `git push` in step 1: the ghc repo will point to a base
 commit that doesn't exist (except on some developer's machine).  Fixing
 things when in this state: the developer who forgot to `git push` in step 1
 needs to do that.

Actually, the new server-side hook will reject (for non-wip/ branches at
least) a ghc.git commit which would result in a submod-ref pointing to a
non-existing commit, so this one's covered already.

 How could sync-all help us:

  * sync-all push could push all repos, preventing failure case 2
  above.

(as I wrote, this can't happen thanks to the new hook script)

However, see the man-page for 'git push --recurse-submodules'

 The second interesting workflow involves pulling new changes. This is what
 the raw (i.e. no sync-all) workflow will look like:

 cd ~/src/ghc
 git pull
 git submodule update

 Failure modes include:

  * Forgetting the `submodule update` and then doing e.g. `git commit -am
 "some compile commit"`, reverting the pointer to e.g. base to whatever
 older version the developer was using. No commits are lost (nothing changes
 in the base repo), but the ghc repo will point to an older commit.

 How could sync-all help us:

  * sync-all pull could always run `submodule update`.

 The server-side check that Herbert added will make sure that the failure
 mode cannot happen, as you explicitly have to say in the commit message
 that you updated a submodule.

 I think if base was folded into ghc.git very few people would have to deal
 with submodules.

if 'base' remains tightly coupled to ghc internals, that might indeed
be the easiest solution; I'm just not sure how the big base-split would
be affected by folding base into ghc.git. Also, supporting a sensible 'cabal
get -s base' will require a bit more work (or we'd have to remove the
ability for that again -- not that it is of much use anyway)

PS: I'm wondering if the next-gen 'sync-all' couldn't be simply realised
by defining a set of git aliases[2]; e.g. it's rather common to have a
'git pullall' alias defined for combining the effect of 'git pull' and
'git submodule update' into one alias[3]


Cheers,
  hvr

 [1]: occurrences of 'ghc-head' will most likely be renamed to 'master'
  as that's more consistent with GHC HEAD being 'master' in ghc.git
  as well

 [2]: https://git.wiki.kernel.org/index.php/Aliases

 [3]: git config alias.pullall '!git pull && git submodule update --init --recursive'



Re: Pretty printing

2014-03-18 Thread Dr. ÉRDI Gergő
Hi,

I'm getting it done later today.

Bye,
Gergo
On Mar 19, 2014 1:36 AM, Simon Peyton Jones simo...@microsoft.com wrote:

  Gergo

 I'm a bit out of date... did you update those test results etc. to get to a
 clean validate?

 S



Re: We need to add role annotations for 7.8

2014-03-18 Thread David Terei
Adding in GHC-DEV. Yes, sorry for no contact; my GHC email filter was
misconfigured, so I only got alerted when SPJ emailed me directly a few
days back.


On 18 March 2014 17:36, David Mazieres expires 2014-06-16 PDT
mazieres-i58umkudjfnfpfdx6jbbn3y...@temporary-address.scs.stanford.edu
wrote:
 At Fri, 14 Mar 2014 18:45:16 +0100,
 Mikhail Glushenkov wrote:

 Hi Richard,

  The real trouble with making this decision is that we have no real
  guidance. We've tried contacting David Terei (the originator of
  Safe Haskell) several times to no avail. If you are an actual
  consumer of Safe Haskell and would like to share your opinion on
  this front, I do encourage you to make a ticket, essentially
  requesting a resurrection of the extra Safe checks.

 Yes, it would be nice if David Terei or David Mazières (CC:ed) could comment.

 Sadly, it appears some mail may not have made it through to David
 Terei's mailbox.

 At any rate, David and I just discussed the new Coerce typeclass.
 Based on David's understanding of its behavior, it sounds pretty
 dangerous for Safe Haskell.  At a minimum, the programmer is going to
 need to understand a lot more than Haskell 2010 to write secure code.

 Based on my possibly limited understanding of the new
 feature--automatically generating instances of the Coerce type seems
 very un-Haskell-like.  By analogy, we could automatically generate
 instances of Read and Show (or add equivalent DebugRead/DebugShow
 classes) wherever possible, but this would similarly break abstraction
 by providing effective access to non-exported constructors.

 I understand why there is a need for something better than
 GeneralizedNewtypeDeriving.  However, implementing Coerce as a
 typeclass has the very serious disadvantage that there is no Haskell
 mechanism for controlling instance exports.  And if we are going to
 add a new mechanism (roles) to control such exports, exporting an
 instance that is never requested and that undermines modularity and
 abstraction is an unfortunate default.

 It may be too late for this, but a cleaner solution more in keeping
 with other extensions would be to have a -XDeriveCoerce extension that
 allows Coerce to be explicitly derived when safe.  This could be
 combined with leaving the previous behavior of
 GeneralizedNewtypeDeriving and just deprecating the language feature.

 Though controlling instance exports does not have a precedent, another
 option might be to special-case the Coerce class and only export
 instances of Coerce when all constructors of a type are also exported.
 This would prevent anyone from using Coerce to do things they couldn't
 already do manually.

 David
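
(For reference, the role-annotation mechanism this thread refers to, as it
ships in GHC 7.8; a minimal sketch with illustrative names:)

    {-# LANGUAGE RoleAnnotations #-}
    module Assoc (Assoc) where   -- constructor deliberately not exported

    newtype Assoc k v = Assoc [(k, v)]

    -- 'nominal' stops Coercible from coercing under the key parameter,
    -- preserving whatever invariants the module attached to k.
    type role Assoc nominal representational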
