Re: GHCJS

2011-08-05 Thread Simon Marlow

On 04/08/2011 21:02, Simon Peyton-Jones wrote:

|  data LiteralDesugaring m =
|    LiteralDesugaring
|      { desugarInt  :: MonadThings m => Integer -> m CoreExpr
|      , desugarWord :: MonadThings m => Integer -> m CoreExpr
...

I am not sure why you want to control the desugaring of literals.  Why 
literals?  And why are literals enough?

|  But I still don't understand what I can do with foreign
|  imports/exports. The DsForeign module seems to be too complicated. As
|  far as I can see, I shouldn't make the whole dsForeigns function
|  replaceable, but I can't understand what part of it should be replaceable.

I still think that the stub generation for foreign declarations should be 
easily separable.   The desugarer generates a certain amount of unwrapping, 
but you'll want that for JavaScript too. The actual calling convention is 
embedded inside the Id: see the FCallId constructor of IdDetails in IdInfo.lhs, 
and the ForeignCall type in ForeignCall.lhs.


There's a lot that's backend-specific about the way we desugar foreign 
import wrappers - calls to createAdjustor passing magic strings and 
suchlike.  It would be nice to identify the stuff that is 
backend-specific and separate it out, I think.


Cheers,
Simon



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: ANNOUNCE: GHC 7.2.1 Release Candidate 1

2011-08-05 Thread Simon Marlow

On 05/08/2011 08:45, Jens Petersen wrote:

On 5 August 2011 05:27, Ian Lynagh ig...@earth.li wrote:

from http://koji.fedoraproject.org/koji/getfile?taskID=3251249&name=build.log .

| *** unexpected failure for fed001(normal)

but it works fine for me on x86/Linux.


Note the Fedora build is patched to use system libffi.


Hmm. What happens if you don't patch it?


More hmmm: that makes the x86 unexpected errors go to 0!

http://koji.fedoraproject.org/koji/taskinfo?taskID=3253482

I attach the system libffi patch if anyone wants to look at it,
but I don't see anything particularly arch-specific about it, so I still
don't understand why it fails for 7.2.  A similar patch for ghc-7.0.4
doesn't seem to have any ill effects on the test results:

eg http://koji.fedoraproject.org/koji/buildinfo?buildID=248071

I'd be interested to know if any other distros can reproduce this or not.
I think the system libffi patch originally comes from Debian.


This is surprising because I don't think ordinary FFI code should even 
be using libffi on x86 - we have our own implementation in 
rts/Adjustors.c.  In 7.2 there are some changes in this area because we 
now guarantee to keep the C stack pointer aligned on a 16-byte boundary 
(see http://hackage.haskell.org/trac/ghc/ticket/5250), and as a result 
we switched to using the Mac OS X implementation in rts/Adjustors.c 
which was already doing the necessary alignment.


You aren't setting UseLibFFIForAdjustors=YES anywhere, are you?  (Even if 
you were, I would expect it to still work, albeit a bit more slowly.)



It would be good to make Linux default to using the system libffi anyway.
Is there any good reason not to do so?  As you know, bundled copies of
libraries are frowned upon in the Linux world.  I can't remember if such
an RFE already exists in trac?


I don't like having to do this, but it reduces our testing surface (we 
don't want to have to test against N different versions of libffi).  I'm 
quite happy for distros to build against their system libffi though, and 
we should make that easier.  Note that if you build against the system 
libffi you are responsible for fully testing the combination (I know you 
already do that, which is great).


Cheers,
Simon

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: ANNOUNCE: GHC 7.2.1 Release Candidate 1

2011-08-05 Thread Simon Marlow

On 05/08/2011 12:08, Joachim Breitner wrote:

Hi,

Am Freitag, den 05.08.2011, 09:46 +0100 schrieb Simon Marlow:

I don't like having to do this, but it reduces our testing surface (we
don't want to have to test against N different versions of libffi).  I'm
quite happy for distros to build against their system libffi though, and
we should make that easier.  Note that if you build against the system
libffi you are responsible for fully testing the combination (I know you
already do that, which is great).


after this broad hint, I ran the full testsuite, getting this result on
Debian:

OVERALL SUMMARY for test run started at Fr 5. Aug 11:02:03 CEST 2011
 2894 total tests, which gave rise to
12535 test cases, of which
0 caused framework failures
 2327 were skipped

 9823 expected passes
  351 expected failures
1 unexpected passes
   33 unexpected failures

Unexpected passes:
concurrent/should_run  throwto002 (ghci)

Unexpected failures:
cabal  ghcpkg05 [bad stderr] (normal)
concurrent/prog002 concprog002 [exit code non-0] 
(threaded2,threaded2_hT)
concurrent/prog003 concprog003 [exit code non-0] 
(normal,hpc,optasm,profasm,threaded1,threaded2,dyn,profthreaded)
concurrent/prog003 concprog003 [bad stdout or stderr] (ghci)
concurrent/should_run  conc023 [exit code non-0] 
(normal,hpc,optasm,profasm,threaded1,threaded2,dyn,profthreaded)
concurrent/should_run  conc023 [bad stdout or stderr] (ghci)
ffi/should_run ffi009 [exit code non-0] 
(normal,hpc,optasm,profasm,threaded1,threaded2,dyn,profthreaded)
ffi/should_run ffi009 [bad stdout or stderr] (ghci)
plugins            plugins05 [exit code non-0] 
(profasm,dyn,profthreaded)

Without looking into details yet, or comparing it with the test suite in
7.0.4.


This result is fine.  Most of those failures have been cleaned up in 
HEAD already, but the changes haven't been merged into the branch yet.


Cheers,
Simon

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: hsc2hs and #include

2011-08-08 Thread Simon Marlow

On 07/08/2011 02:18, Evan Laforge wrote:

On Sat, Jul 30, 2011 at 9:25 PM, Ian Lynagh ig...@earth.li wrote:

But I also think we may as well just remove most of these conditionals.
The GHC < 4.09 tests can surely be removed, and likewise the GHC < 6.3
tests. Personally I'd remove the GHC < 6.10 test too, but perhaps that
will be more contentious.

Any opinions?


That was going to be my first suggestion.  Maybe the only reason these
are needed is that the hsc2hs binary itself isn't versioned, otherwise
you simply run the one that came with your ghc, and if it's for ghc-4
then it should be producing code ghc-4 understands.

So the problem would be with code that knows to specifically invoke an
older ghc, but still picks up the hsc2hs symlink which points to a
newer one.  I don't know of any framework for compiling with multiple
versions, but I'd think it should be smart enough to find the
appropriate ghc lib directory and run the various utilities out of
there.


So what's the consensus here?  Does dropping all backwards
compatibility from hsc2hs make sense?  Presumably it's there for a
reason so I may be missing something.

In any case, though I like the idea of dropping all the #ifdefs, I
think the specific instance for omitting #includes is incorrect, and
I'm not sure why other people aren't seeing that... I don't understand
what's going on with __GLASGOW_HASKELL__.  Maybe something funny with
my install?

Should I try to send a patch for the remove all backward compatibility
thing?  Or one for the specific #include problem I've been having?


I've lost track of all the details here.  But perhaps there's some 
historical cruft lying around because hsc2hs used to call GHC to compile 
its C files, and hence __GLASGOW_HASKELL__ would have been defined.


In fact, the GHC build system passes 
--cflag=-D__GLASGOW_HASKELL__=version to hsc2hs when it runs it. 
Maybe Cabal should do the same.  Does it?


The problem with making the INCLUDE pragma conditional is that you can 
only do conditional pragmas using CPP, which requires the CPP extension, 
and moreover older versions of GHC did not support conditional 
compilation of pragmas (I forget which version added this, maybe 6.12).
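
To illustrate, a conditional INCLUDE would have to look something like the 
sketch below (the header name and the version boundary here are made up, 
purely for illustration), and the #if itself already requires CPP:

   {-# LANGUAGE CPP #-}
   #if __GLASGOW_HASKELL__ < 612
   {-# INCLUDE "HsFoo.h" #-}   -- hypothetical header, for illustration only
   #endif
   module Foo where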


Cheers,
Simon

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: ANNOUNCE: GHC 7.2.1 Release Candidate 1

2011-08-09 Thread Simon Marlow

On 09/08/2011 05:59, Jens Petersen wrote:


I'm quite happy for distros to build against their system libffi though, and we 
should
make that easier.  Note that if you build against the system libffi you are
responsible for fully testing the combination (I know you already do that,
which is great).


A configure option to enable system libffi would be very good.
Should I file an RFE for that?


yes, please do.

Cheers,
Simon


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: hsc2hs and #include

2011-08-09 Thread Simon Marlow

On 09/08/2011 02:44, Evan Laforge wrote:


So the simplest thing to do is remove all the version stuff.  That
means that if you want to run hsc2hs with a version of ghc which is
not the one linked in /usr/bin, you also can't run the hsc2hs linked
in /usr/bin, but have to get the one out of the ghc directory.  If no
one has an objection to that then I'll try to make a patch with git
and put it in a ticket.

The next simplest thing to do is to just document that anyone calling
hsc2hs has to pass -D__GLASGOW_HASKELL__=version.  I don't like this so
much because it's error-prone, but it doesn't require any code changes.
So... remove it all?  Yea / nay?


Yes, ok.

Simon

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: What are the preconditions of newArray#

2011-08-23 Thread Simon Marlow
An Array# of size zero is a perfectly reasonable thing.  If it doesn't 
work, it should (and I vaguely recall making it work at some point in 
the past, but perhaps I failed to add a test and as a result it has 
rotted...)


Cheers,
Simon

On 22/08/11 17:08, Johan Tibell wrote:

I agree (unless it has a performance cost). I had to fix a couple of
bugs in my code associated with generating zero-length arrays.

On Mon, Aug 22, 2011 at 5:54 PM, Edward Kmett ekm...@gmail.com wrote:

It would still be nice to have a consistent base case.

On Mon, Aug 22, 2011 at 3:43 AM, Johan Tibell johan.tib...@gmail.com
wrote:


On Mon, Aug 22, 2011 at 5:55 AM, Edward Z. Yang ezy...@mit.edu wrote:

stg_newArrayzh in rts/PrimOps.cmm doesn't appear to give any indication,
so this might be a good patch to add.  But I'm curious: what would
allocating Array#s of size 0 do? Null pointers? That sounds dangerous...


I would imagine that a zero sized array would be a StgArrPtrs header
with its size field set to 0. It's not a very useful thing to have, I
admit. If someone (Simon?) can confirm that we don't intend to support
zero-length array I'll push a patch that adds a comment.

Johan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users






___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: What are the preconditions of newArray#

2011-08-23 Thread Simon Marlow
You could make it a precondition of copyArray#, although that's slightly 
less pleasant from a user's perspective.


Cheers,
Simon

On 23/08/11 21:04, Johan Tibell wrote:

It could well be that it's some later primop that's failing due to the
empty size, like my new copyArray# primop. If that's the case I could
fix it, but I would probably have to add a branch to the copyArray#
primop, which I'm reluctant to do.

On Tue, Aug 23, 2011 at 9:47 PM, Simon Marlow marlo...@gmail.com wrote:

An Array# of size zero is a perfectly reasonable thing.  If it doesn't work,
it should (and I vaguely recall making it work at some point in the past,
but perhaps I failed to add a test and as a result it has rotted...)

Cheers,
Simon

On 22/08/11 17:08, Johan Tibell wrote:


I agree (unless it has a performance cost). I had to fix a couple of
bugs in my code associated with generating zero-length arrays.

On Mon, Aug 22, 2011 at 5:54 PM, Edward Kmett ekm...@gmail.com wrote:


It would still be nice to have a consistent base case.

On Mon, Aug 22, 2011 at 3:43 AM, Johan Tibell johan.tib...@gmail.com
wrote:


On Mon, Aug 22, 2011 at 5:55 AM, Edward Z. Yang ezy...@mit.edu wrote:


stg_newArrayzh in rts/PrimOps.cmm doesn't appear to give any
indication,
so this might be a good patch to add.  But I'm curious: what would
allocating Array#s of size 0 do? Null pointers? That sounds
dangerous...


I would imagine that a zero sized array would be a StgArrPtrs header
with its size field set to 0. It's not a very useful thing to have, I
admit. If someone (Simon?) can confirm that we don't intend to support
zero-length array I'll push a patch that adds a comment.

Johan

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users









___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Pkg-haskell-maintainers] libffi soname change upcoming

2011-08-25 Thread Simon Marlow

On 24/08/2011 13:12, Joachim Breitner wrote:

Hi,

Am Mittwoch, den 24.08.2011, 12:44 +0200 schrieb Matthias Klose:

The question that has to be answered first is: Assume the libraries do
not depend on libffi themselves, and only ghc does. Now you update
libffi and ghc gets rebuilt; what will happen:

  A) The haskell ABIs stay the same, the existing library packages can
still be used. Great.

  B) The haskell ABIs change. We’ll have to binNMU all Haskell libraries,
but oh well, not bad thanks to BD-Uninstallable-support in wanna-build
and autosigning.

  C) The haskell ABIs do not change, but the old library builds are
broken nevertheless. Big mess. Hard to recover from, because builds are
not ordered automatically any more. Needs lots of NMUes and Dep-Waits.


sorry, I don't get the `C' case. why should these be broken by a libffi or
libgmp change?


Maybe it’s an unrealistic example, but I could imagine that some data
type (size) defined by libffi is used by ghc when generating code for a
haskell library, under the assumption that it has the same structure/size
in the run time system and/or other haskell libraries used.

But instead of making blind guesses, maybe GHC upstream can enlighten
us: Is it safe to build ghc and a Haskell library, then upgrade libffi
to a new version (with soname bump), rebuild ghc, but use the previous
library build?


So there might be difficulties because we build static libraries.  E.g. 
the RTS would have been built against the previous libffi, but would 
then be linked against the new one, which might be ABI-incompatible. 
Shared libraries would notice the upgrade and use the old ABI, but 
static libraries won't.


How is this supposed to work, incidentally?  I just checked the Drepper 
document about shared libraries and he doesn't seem to mention this 
problem.  How do other packages with static libraries deal with this?


Cheers,
Simon

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Parallel --make (GHC build times on newer MacBook Pros?)

2011-08-31 Thread Simon Marlow

On 30/08/2011 00:42, Thomas Schilling wrote:

The performance problem was due to the use of unsafePerformIO or other
thunk-locking functions.  Such functions can cause severe performance
problems when using a deep stack, because they need to traverse the
stack to atomically claim thunks that might be under evaluation by
multiple threads.

The latest version of GHC should no longer have this problem (or not
as severely) because the stack is now split into chunks (see [1] for
performance tuning options) only one of which needs to be scanned.
So, it might be worth a try to re-apply that thread-safety patch.

[1]: https://plus.google.com/107890464054636586545/posts/LqgXK77FgfV


I think I would do it differently.  Rather than using unsafePerformIO, 
use unsafeDupablePerformIO with an atomic idempotent operation.  Looking 
up or adding an entry to the FastString table can be done using an 
atomicModifyIORef, so this should be fine.


The other place you have to look carefully at is the NameCache; again an 
atomicModifyIORef should do the trick there.  In GHC 7.2.1 we also have 
a casMutVar# primitive which can be used to build lower-level atomic 
operations, so that might come in handy too.
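
To make that concrete, something like the following minimal sketch is what 
I have in mind, with a plain Data.Map keyed by String standing in for the 
real FastString table (the names and the table type here are just for 
illustration):

   import Data.IORef
   import qualified Data.Map as Map
   import System.IO.Unsafe (unsafeDupablePerformIO)

   {-# NOINLINE table #-}
   table :: IORef (Map.Map String Int)
   table = unsafeDupablePerformIO (newIORef Map.empty)

   -- Look a string up, inserting it with a fresh id if it isn't there yet.
   -- The whole update is a single atomicModifyIORef, so it is idempotent:
   -- if two threads race inside unsafeDupablePerformIO, the second call
   -- simply finds the entry the first one inserted and returns the same id.
   intern :: String -> Int
   intern s = unsafeDupablePerformIO $
       atomicModifyIORef table $ \m ->
           case Map.lookup s m of
               Just i  -> (m, i)
               Nothing -> let i = Map.size m
                          in (Map.insert s i m, i)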


Cheers,
Simon



On 29 August 2011 21:50, Max Bolingbroke batterseapo...@hotmail.com wrote:

On 27 August 2011 09:00, Evan Laforge qdun...@gmail.com wrote:

Right, that's probably the one I mentioned.  And I think he was trying
to parallelize ghc internally, so even compiling a single file could be
parallelized.  That would be cool and all, but it seems like a lot of
work compared to just parallelizing at the file level, as make would do.


It was Thomas Schilling, and he wasn't trying to parallelise the
compilation of a single file. He was just trying to make access to the
various bits of shared state GHC uses thread safe. This mostly worked
but caused an unacceptable performance penalty to single-threaded
compilation.

Max

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users








___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Parallel --make (GHC build times on newer MacBook Pros?)

2011-09-01 Thread Simon Marlow

On 01/09/2011 08:44, Evan Laforge wrote:

Yes, the plan was to eventually have a parallel --make mode.


If that's the goal, wouldn't it be easier to start many ghcs?


It's an interesting idea that I hadn't thought of.  There would have to 
be an atomic file system operation to commit a compiled module - 
getting that right could be a bit tricky (compilation isn't 
deterministic, so the commit has to be atomic).


Then you would probably want to randomise the build order of each --make 
run to maximise the chance that each GHC does something different.


Fun project for someone?

Cheers,
Simon

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Parallel --make (GHC build times on newer MacBook Pros?)

2011-09-02 Thread Simon Marlow

On 01/09/2011 18:02, Evan Laforge wrote:

It's an interesting idea that I hadn't thought of.  There would have to be
an atomic file system operation to commit a compiled module - getting that
right could be a bit tricky (compilation isn't deterministic, so the commit
has to be atomic).


I suppose you could just rename it into place when you're done.
-Edward


I was imagining that it could create Module.o.compiling and then
rename into place when it's done.  Then each ghc would do a work
stealing thing where it tries to find output to produce that doesn't
have an accompanying .compiling, or sleeps for a bit if all work at
this stage is already taken, which is likely to happen since sometimes
the graph would go through a bottleneck.  Then it's easy to clean up
if work gets interrupted, just rm **/*.compiling


Right, using a Module.o.compiling file as a lock would work.
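
A minimal sketch of that commit step (the function name is made up, and it 
assumes POSIX semantics, where a rename within one filesystem is atomic):

   import System.Directory (renameFile)

   -- Produce the object code under "Module.o.compiling", then rename it
   -- into place.  Other ghc processes treat an existing .compiling file as
   -- "someone is already building this module", and an interrupted run
   -- leaves only .compiling files behind, which are easy to clean up.
   commitObject :: FilePath -> (FilePath -> IO ()) -> IO ()
   commitObject objPath compile = do
       compile tmp              -- write the object under the temporary name
       renameFile tmp objPath   -- atomic commit: old or new, never half-written
     where
       tmp = objPath ++ ".compiling"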

Another way to do this would be to have GHC --make invoke itself to 
compile each module separately.  Actually I think I prefer this method, 
although it might be a bit slower since each individual compilation has 
to read lots of interface files.  The main GHC --make process would do 
the final link only.  A fun hack for somebody?


Cheers,
Simon



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Parallel --make (GHC build times on newer MacBook Pros?)

2011-09-05 Thread Simon Marlow

On 03/09/2011 02:05, Evan Laforge wrote:

Another way to do this would be to have GHC --make invoke itself to
compile each module separately.  Actually I think I prefer this method,
although it might be a bit slower since each individual compilation has
to read lots of interface files.  The main GHC --make process would do
the final link only.  A fun hack for somebody?


this would also help building large libraries on architectures with
little memory, as it seems to me that when one ghc instance is compiling
multiple modules in a row, some leaked memory/unevaluated thunks pile up
and eventually cause the compilation to abort. I suspect that building
each file on its own avoids this issue.


In my experience, reading all those .hi files is not so quick: about
1.5s for around 200 modules, on an SSD.  It gets worse with -pgmF, since ghc
wants to preprocess each file; it's a minimum of 5s even with 'cat' as the
preprocessor.

Part of my wanting to use make instead of --make was to avoid this
re-preprocessing delay.  It's nice that it will automatically notice
which modules to recompile if a CPP define changes, but not so nice
that it has to take a lot of time to figure that out every single
compile, or for a preprocessor that doesn't have the power to change
whether the module should be recompiled or not.


Ah, but you're measuring the startup time of ghc --make, which is not 
the same as the work that each individual ghc would do if ghc were 
invoked separately on each module, for two reasons:


 - when used in one-shot mode (i.e. without --make), ghc only reads
   and processes the interface files it needs, lazily

 - the individual ghc's would not need to preprocess modules - that
   would only be done once, by the master process, before starting
   the subprocesses.  The preprocessed source would be cached,
   exactly as it is now by --make.

Cheers,
Simon

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: secret of light-weight user thread

2011-09-06 Thread Simon Marlow

On 06/09/2011 09:47, Kazu Yamamoto (山本和彦) wrote:


Recently I exchanged information about user threads with Ruby
community in Japan.

The user threads of Ruby 1.8 are heavyweight and Ruby 1.9 switched to
kernel threads. The reason why the user threads of Ruby 1.8 are
heavyweight is *portability*. Since the Ruby community does not want to
write assembler to change stack pointers for each supported CPU
architecture, Ruby 1.8 copies the stack of user threads on a context
switch.

Because I need to explain why the user threads of GHC are light
weight, I gave a look at GHC's source code and found the
loadThreadState function in compiler/codeGen/StgCmmForeign.hs. In this
function, the stack pointer is changed in the Cmm level.

So, my interpretation is as follows: Since GHC has Cmm backend, it is
easy to provide assembler to change stack pointers for each supported
CPU architecture. That's why GHC can provide light weight user
threads.

Is my understanding correct?


There are a couple of reasons why GHC's threads are cheaper than OS 
threads; it's not really to do with the Cmm backend:


 - We have an accurate GC, which means that the Haskell stack can be
   movable, whereas the C stack isn't.  So we can start with small
   stacks and enlarge them as necessary.

 - We only preempt threads at safe points.  This means that the
   context we have to save at a context switch is platform-independent
   and typically much smaller than the entire CPU context.  Safe
   points are currently on allocation, which is frequent enough in GHC
   for this to be indistinguishable (most of the time) from true
   preemption.

User-space threads are often dismissed because of the problems with 
implementing concurrent foreign calls.  We have a nice solution for this 
problem in GHC that I think is probably under-appreciated in the wider 
language community:


  http://community.haskell.org/~simonmar/papers/conc-ffi.pdf

Cheers,
Simon

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: secret of light-weight user thread

2011-09-07 Thread Simon Marlow

On 07/09/2011 08:13, Kazu Yamamoto (山本和彦) wrote:

Simon,

Thank you for explanation.


  - We have an accurate GC, which means that the Haskell stack can be
movable, whereas the C stack isn't.  So we can start with small
stacks and enlarge them as necessary.


What is the difference between the Haskell stack and the C stack?
I guess that the C stack means CPU's stack. Is the Haskell stack a
virtual stack for a virtual machine (STG machine or something)?


There's no fundamental difference between the Haskell stack and the C 
stack, they are both just runtime data structures used by compiled code. 
 We designed the Haskell stack so that pointers within it can be 
identified by the GC, that's all.


When running Haskell code there's a register that points to the top of 
the Haskell stack, just like when running C code (it's a different 
register, but in principle there's no reason it has to be different).



I quickly read several papers but I have no idea.


  - We only preempt threads at safe points.  This means that the
context we have to save at a context switch is platform-independent
and typically much smaller than the entire CPU context.  Safe
points are currently on allocation, which is frequent enough in GHC
for this to be indistinguishable (most of the time) from true
preemption.


It seems to me that StgRun saves CPU registers. You mean that StgRun
depends on the CPU a bit, but the rest of the context is CPU-independent?


StgRun is the interface between the C world and the Haskell world, which 
have different ABIs.  In particular, the C ABI requires that function 
calls do not modify certain registers (the callee-saves registers), 
whereas in Haskell there are no callee-saves registers. So StgRun saves 
the callee-saves registers while running Haskell code, that's all.  It 
may have to do other things depending on what specific conventions are 
used by C or Haskell code on the current platform.


This is just something we have to do so that we can call Haskell code 
from C.  It's not related to threads, except that the GHC scheduler is 
written in C so we have to go through StgRun every time we start or stop 
a Haskell thread.


Cheers,
Simon

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Is there a non-blocking version of hGetArray?

2004-10-01 Thread Simon Marlow
On 01 October 2004 08:45, Peter Simons wrote:

 I am a happy user of hGetBufNonBlocking, but I have come to
 realize that mutable arrays are nicer to work with than
 pointers, so I have considered using hGetArray instead. I
 do, however, depend on the fact that the function returns as
 soon as it has read data -- even if less than requested --,
 like hGetBufNonBlocking does.
 
 Is there currently a way to achieve this?

Not currently, but I could probably implement the equivalent
(hGetArrayNonBlocking).

 Am I right assuming that hGetBuf and hGetArray do not differ
 much performance-wise?

Hopefully not.

 One of the reasons I am curious about using mutable arrays
 is because of Data.Array.Base.unsafeRead, which seems to be
 a *lot* faster than accessing the memory through a pointer.
 Is there anything comparable for pointer access?

I'm surprised if pointer access to memory is slower than unsafeRead.
Could you post some code that we can peer at?
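
For reference, the two access styles I have in mind are roughly these (a 
sketch of mine, not your code, assuming a Word8 buffer of known length):

   {-# LANGUAGE BangPatterns #-}
   import Data.Array.Base (unsafeRead)
   import Data.Array.IO (IOUArray)
   import Data.Word (Word8)
   import Foreign.Ptr (Ptr)
   import Foreign.Storable (peekElemOff)

   -- Sum the buffer through the mutable array interface (no bounds checks).
   sumArray :: IOUArray Int Word8 -> Int -> IO Int
   sumArray arr n = go 0 0
     where
       go !i !acc
         | i == n    = return acc
         | otherwise = do w <- unsafeRead arr i
                          go (i + 1) (acc + fromIntegral w)

   -- The same loop reading through a raw pointer.
   sumPtr :: Ptr Word8 -> Int -> IO Int
   sumPtr p n = go 0 0
     where
       go !i !acc
         | i == n    = return acc
         | otherwise = do w <- peekElemOff p i
                          go (i + 1) (acc + fromIntegral w)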

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Is there a non-blocking version of hGetArray?

2004-10-05 Thread Simon Marlow
On 02 October 2004 13:04, Tomasz Zielonka wrote:

 On Fri, Oct 01, 2004 at 09:34:36PM +0100, Simon Marlow wrote:
 
 Not currently, but I could probably implement the equivalent
 (hGetArrayNonBlocking).
 
 It is perhaps not closely related, but could we also have
 Network.Socket recvFrom / sendTo working on raw buffers?
 
 I've attached a proposed implementation. It moves most of code to
 recvBufFrom and sendBufTo, and changes recvFrom / sendTo to use the
 *Buf* functions.

Committed, thanks!
 
 It would be nice if these functions could be used to implement
 efficient recvFromArray / sendToArray (without copying), but I don't
 know if it's possible to get the pointer from MutableByteArray. Is
 there a danger that GC invalidates the pointer?

It is possible to get a Ptr from a MutableByteArray, but only if the
array was allocated pinned, and only if you make sure it lives across
any foreign calls (using touch#).  This is how Foreign.alloca works, for
example.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: symbol __stg_split_marker' is already defined

2004-10-05 Thread Simon Marlow
On 02 October 2004 01:22, John Goerzen wrote:

 rm -f GHC/Base.o; if [ ! -d GHC/Base_split ]; then mkdir GHC/Base_split;
 else /opt/freeware/bin/find GHC/Base_split -name '*.o' -print | xargs rm -f __rm_food; fi;
 ../../ghc/compiler/ghc-inplace -H16m -O -fglasgow-exts -cpp -Iinclude
 -#include "HsBase.h" -funbox-strict-fields -package-name base -O
 -Rghc-timing -split-objs -c GHC/Base.lhs -o GHC/Base.o -ohi GHC/Base.hi
 /tmp/ghc82602.hc:2593: warning: this decimal constant is unsigned only in ISO C90
 /tmp/ghc82602.s: Assembler messages:
 /tmp/ghc82602.s:128: Error: symbol `__stg_split_marker' is already defined
 /tmp/ghc82602.s:287: Error: symbol `__stg_split_marker' is already defined
 /tmp/ghc82602.s:448: Error: symbol `__stg_split_marker' is already defined
 
 And about a hundred more.  Ideas?

You probably need to add some support for your platform to the split
script in ghc/driver/split/ghc-split.lprl and/or the mangler in
ghc/driver/mangler/ghc-asm.lprl.  For now, you can turn off splitting as
Don suggested.

 Oh, BTW, I had to put a script named gcc on my path.  It says this:
 
 exec /usr/local/bin/gcc -mpowerpc -maix32 -mminimal-toc "$@"

These options should go in machDepCCOpts in
ghc/compiler/main/DriverFlags.hs, and then you can do away with the gcc
script.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Using packages in ghci

2004-10-05 Thread Simon Marlow
I think what's happening here is that you essentially have overlapping
.hi files: the interface for Data.Tree.AVL.List is found along the
search path, and also in a package (because the search path . and the
import_dirs for the package point to the same place).

This is apparently confusing GHC, but I don't know exactly how.  We
could fix the problem by checking whether a given module can be found
in more than one place, but that would involve searching every package
for every module, which might slow things down.  When we store the list
of visible modules with a package we'll be able to do this much more
easily.

Cheers,
Simon

On 02 October 2004 08:42, Adrian Hey wrote:

 On Friday 01 Oct 2004 9:36 pm, Simon Marlow wrote:
 Looks fine - GHCi is supposed to look in the directories in
 import_dirs 
 for .hi files.  What does ghci -v tell you?
 
 Quite a lot :-), but nothing very surprising. I think I've found what
 causes the problem. It does actually seem to work as expected,
 provided the current directory is not what it usually is when I'm
 working on the library. 
 
 I.E. /home/adrian/HaskellLibs/Data.Tree.AVL
 
 This is what I get
 
 Prelude> :m Data.Tree.AVL
 Prelude Data.Tree.AVL> asTreeL "ABCD"
 
 interactive:1:
     tcLookup: `asTreeL' is not in scope
     In the definition of `it': it = asTreeL "ABCD"
 
 Failed to load interface for `Data.Tree.AVL.List':
     Bad interface file: ./Data/Tree/AVL/List.hi
     ./Data/Tree/AVL/List.hi: openBinaryFile: does not exist (No
 such file or directory)
 
 Failed to find interface decl for `asTreeL'
 from module `Data.Tree.AVL.List'
 
 But if I cd to..
 
 I.E. /home/adrian/HaskellLibs/Data.Tree.AVL/pkg
 
 ..it works fine. I've since discovered it also seems to work fine
 from any other directory too. So it seems to be something peculiar about
 the one particular directory that upsets it.
 
 This is with version 6.2.20040915, but 6.2.1 did the same thing IIRC.
 
 Regards

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: BeOS port

2004-10-05 Thread Simon Marlow
On 03 October 2004 02:14, Donald Bruce Stewart wrote:

 Waldemar.Kornewald:
 Hi,
 is it possible to use a simpler build system for GHC? :)
 
 It isn't so bad. It seems to be quite portable :)

We *are* using a simpler build system for GHC.  You should have seen the
last one :-)

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: threadsafe needed

2004-10-05 Thread Simon Marlow
On 05 October 2004 10:16, John Meacham wrote:

 Quite a few foreign calls in the library are missing threadsafe in
 their declarations. If this could be fixed by 6.2.2 that would be
 great. In particular, system, rawSystem, and DNS lookups are important
 to be able to do concurrently.

Thanks, I've fixed those.  We don't have threadsafe by the way; only
safe/unsafe.  Safe also means threadsafe.

 Also, are there plans for a threadunsafe? It seems like it would be
 a much more common case than threadsafe and should be much easier to
 implement, since there is no chance the haskell runtime can be called
 from a thread it didn't expect.

Not sure what you mean by threadunsafe.  Is it different from plain
unsafe?

We *want* the RTS to be able to be called by threads it doesn't expect.
Otherwise how do you implement a thread-safe library API in Haskell?
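
For reference, the two annotations we do have look like this (a sketch; 
do_lookup is a made-up C function).  With the threaded RTS a safe call 
lets other Haskell threads keep running and may call back into Haskell; 
an unsafe call is cheaper but must not call back into Haskell and should 
not block:

   {-# LANGUAGE ForeignFunctionInterface #-}
   import Foreign.C.String (CString)
   import Foreign.C.Types (CInt)

   foreign import ccall safe   "do_lookup" c_lookupSafe   :: CString -> IO CInt
   foreign import ccall unsafe "do_lookup" c_lookupUnsafe :: CString -> IO CInt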

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Bools are not unboxed

2004-10-05 Thread Simon Marlow
On 03 October 2004 14:07, Tomasz Zielonka wrote:

 Then I noticed the cause:
 GHC.Prim.<# returns a boxed, heap allocated Bool, and so do other
 primitive comparison operators.
 
 Would it be difficult to add Bool unboxing to GHC?
 Maybe it would suffice to use preallocated False and True?

Just to clarify a little more:  although the raw primitive operations do
appear to return fully boxed True  False values, in practice they
rarely do, because the code genrator optimises

  case a <=# b of
     True  -> e1
     False -> e2

to 

  if (a <=# b) { 
    .. code for e1 ...
  } else { 
    .. code for e2 ..
  }

(well, more or less).  Only if the result of the comparison is actually
being returned from a function does it get turned into a real Bool.

It would probably be better to return 0#/1# instead of a Bool from the
comparison primops, because this would expose slightly more detail to
the simplifier and might result in slightly better code in some cases
(but no dramatic improvements).  It would also let us remove a bit of
complexity from the code generator.
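
(For the record, written against that convention the example above would 
read:

  case a <=# b of
    1# -> e1
    0# -> e2

with something like tagToEnum# recovering a real Bool in the rare cases 
one is actually wanted.)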

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: hWaitForInput and timeouts

2004-10-05 Thread Simon Marlow
On 03 October 2004 19:34, Peter Simons wrote:

 I have another I/O problem. I need to time out when a Handle
 blocks forever. I am using hWaitForInput anyway, so that
 shouldn't be a problem, but the documentation says that
 using this feature will block all IO threads? Is it much
 work to fix this? I _could_ forkIO a racer thread myself, of
 course, but it feels wrong to do that around a function that
 has an explicit timeout argument. :-)

I've fixed it so it'll work in the threaded RTS.  However, forking a
Haskell thread to do the threadDelay might still be quicker.

In the unthreaded RTS there isn't an easy fix, I'm afraid.  The reason
is that we don't currently have a way to block a thread on both I/O
*and* a timeout.
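
For completeness, the racer-thread workaround looks something like this 
(a sketch; the function name is mine, and it assumes it's acceptable to 
kill the action when the timer wins):

   import Control.Concurrent

   -- Run an action against a timer; whichever finishes first wins.
   withTimeout :: Int -> IO a -> IO (Maybe a)
   withTimeout micros act = do
       done   <- newEmptyMVar
       worker <- forkIO (act >>= putMVar done . Just)
       timer  <- forkIO (threadDelay micros >> putMVar done Nothing)
       result <- takeMVar done
       mapM_ killThread [worker, timer]
       return result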

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: AIX 5.1L Build of GHC 6.2.1

2004-10-05 Thread Simon Marlow
On 04 October 2004 17:19, John Goerzen wrote:

 On 2004-10-02, John Goerzen [EMAIL PROTECTED] wrote:
 to help from people here, I have built a working GHC 6.2.1 for
 AIX5.1L.  (The last GHC I could find for AIX was GHC 2.09!)
 
 As a follow-up question: what does it take to get this listed at
 http://www.haskell.org/ghc/download_ghc_621.html?  I bet there is
 someone else out there that would find it useful.

Done.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Bools are not unboxed

2004-10-06 Thread Simon Marlow
On 06 October 2004 00:53, John Meacham wrote:

 On Tue, Oct 05, 2004 at 01:48:30PM +0100, Simon Marlow wrote:
 It would probably be better to return 0#/1# instead of a Bool from
 the comparison primops, because this would expose slightly more
 detail to the simplifier and might result in slightly better code in
 some cases (but no dramatic improvements).  It would also let us
 remove a bit of complexity from the code generator.
 
 This seems like it could be nicely generalized such that all
 enumeration types unbox to the unboxed integer of their offset. so
 
 data Perhaps = Yes | Maybe | No
 
 can unbox to an Int# with 0# == Yes 1# == Maybe and 2# == No.

Yes, a strict enumeration should be implemented as an Int#, both in the
strictness analyser and also when you {-# UNPACK #-} a constructor
field.  This is something we'd like to try, but haven't got around to it
yet.  Maybe a good bite-sized project for a budding GHC hacker? :-)

 Then we get the Bool optimization for free.

The original question was about the primitive comparisons, so I think
we'd still have to change the types of these primitives.  Furthermore
we'd probably have to teach the compiler that the result of the
comparison primops is compatible with a strict Bool.  It wouldn't be
entirely free.
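
Concretely, the kind of declaration under discussion is the sketch below; 
the point of the proposal is that the Perhaps field inside T would be 
represented as a raw Int# (0#, 1# or 2#) rather than a pointer (GHC 
doesn't do this yet):

   data Perhaps = Yes | Maybe | No

   data T = T {-# UNPACK #-} !Perhaps Int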

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Bools are not unboxed

2004-10-06 Thread Simon Marlow
On 06 October 2004 11:36, Josef Svenningsson wrote:

 Simon Marlow wrote:
 
 On 06 October 2004 00:53, John Meacham wrote:
 
 
 
 This seems like it could be nicely generalized such that all
 enumeration types unbox to the unboxed integer of their offset. so
 
 data Perhaps = Yes | Maybe | No
 
 can unbox to an Int# with 0# == Yes 1# == Maybe and 2# == No.
 
 
 
 Yes, a strict enumeration should be implemented as an Int#, both in
 the strictness analyser and also when you {-# UNPACK #-} a
 constructor field.  This is something we'd like to try, but haven't
 got around to it yet.  Maybe a good bite-sized project for a budding
 GHC hacker? :-) 
 
 
 
 Would it really be correct to translate it to Int#? AFAIK, unboxed
 values may not contain bottom while a data type most certainly can. I
 would imagine translating it to Int, and then relying on GHC's
 optimiser to optimize this into Int# whenever possible.

Note I said a *strict* enumeration.  You're right that in general it
wouldn't be correct to implement Bool by Int#.  Only when the strictness
analyser has determined that a function argument of enumeration type is
strict, or the programmer has added a strictness annotation to a
constructor field.

Certainly right now you can use Int everywhere instead of enumeration
types, and perhaps get better performance because GHC will unbox the Int
whenever possible.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: ghc from CVS HEAD doesn't work with -O -threaded

2004-10-20 Thread Simon Marlow
On 19 October 2004 17:08, Peter Simons wrote:

 The following reproducibly fails:
 
   $ darcs get http://cryp.to/hsdns  [*]
   $ cd hsdns/
   $ hsc2hs ADNS.hsc
   $ ghc -threaded -Wall -O --make test.hs -o test -ladns
   | Chasing modules from: test.hs
   | Compiling ADNS ( ./ADNS.hs, ./ADNS.o )
   | /tmp/ghc2613.hc:9:23: ADNS_stub.h: No such file or directory
   | /tmp/ghc2613.hc: In function `s8Xa_ret':
   | /tmp/ghc2613.hc:6340: error: `ADNS_d7eN' undeclared (...)
 
 If you build the program without optimization,
 
   $ ghc -threaded --make test -ladns
 
 it works just fine. This seems to happen only when the
 threaded RTS is involved; -O without -threaded works.

Should be fixed now.  Sorry for the delay in getting around to this.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Top level mutable data structures problem

2004-10-20 Thread Simon Marlow
On 20 October 2004 14:36, Adrian Hey wrote:

 [Excuse me for moving this discussion to the ghc mailing list,
 but it seems the appropriate place, seeing as ghc is where
 any solution will happen first in all probability.]
 
 I've noticed that the neither of the two Simons has expressed an
 opinion re. the discussions on this issue that occurred on the
 Haskell mailing list over the weekend. So I'd like to ask what's
 happening (or likely to happen) about this, if anything?

I liked the original idea.  I'm not sure if I agree with the argument
that allowing fully-fledged IO actions in the initialisation of a module
is unsafe.  I agree that it is a little opaque, in the sense that one
can't easily tell whether a particular init action is going to run or
not.  

On the other hand, instances currently have the same problem: you
can't tell what instances are in scope in your module without looking
through the transitive closure of modules you import, including
libraries, which you might not have source code for.

The proposed scheme wouldn't allow forcing an ordering on init actions
between two modules (eg. initialise the network library, then initialise
the HTTP library).

In any case, we're not going to rush to implement anything.  Discuss it
some more ;-)

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Mutable hash?

2004-10-25 Thread Simon Marlow
On 23 October 2004 19:25, Lauri Alanko wrote:

 On Thu, Oct 21, 2004 at 09:17:20AM -0400, Robert Dockins wrote:
 There is a hashtable in the IO monad:
 

http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data.HashTable.html
 
 Why is it in IO instead of the more general ST? IMHO _all_ mutable
 data structures should be written for ST (or a generalization
 thereof), since one can always use stToIO if operation in the IO
 monad is really required.

Because I'm lazy, and I don't use ST very often.  Patches are welcome,
as usual :-)
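
(For what it's worth, the direction Lauri suggests is cheap to use from IO: 
stToIO lifts any ST RealWorld computation.  A tiny sketch, with an STRef 
standing in for a hash table:)

   import Control.Monad.ST (RealWorld, stToIO)
   import Data.STRef

   bumpCounter :: STRef RealWorld Int -> IO Int
   bumpCounter ref = stToIO $ do
       modifySTRef ref (+ 1)
       readSTRef ref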

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Change in ghc-6.2.2 distribution files

2004-10-25 Thread Simon Marlow
Sorry I should have broadcast a message about this; I mentioned it to
the *BSD guys because I knew they'd be affected, but forgot about
darwinports.  Sorry about that.

Cheers,
Simon

On 23 October 2004 20:27, Gregory Wright wrote:

 Hi Sven,
 
 Yes, that would be it. The change is harmless enough.
 
 Thanks for the pointer to the message.
 
 Greg
 
 On Oct 23, 2004, at 3:01 PM, Sven Panne wrote:
 
 Gregory Wright wrote:
 Did the file ghc-6.2.2.tar.bz2 get changed without the version
 number being changed? The md5 sum of the files has changed,
 breaking the darwinports and presumably the *BSD ports builds as
 well. 
 I didn't see any notice to the list, so I'm not sure if the change
 is intentional, or if the wrong file is being distributed.
 
 Hmmm, I guess it's this:
 
http://www.haskell.org//pipermail/cvs-ghc/2004-October/022173.html
 
 Cheers,
S.
 
 
 ___
 Glasgow-haskell-users mailing list
 [EMAIL PROTECTED]
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Are handles closed automatically when they fall out of scope?

2004-10-25 Thread Simon Marlow
On 22 October 2004 21:58, Peter Simons wrote:

 I know it's a rather mundane question, but I couldn't find
 an answer to it!
 
 So what does happen when I forget to hClose a Handle? Will
 the garbage collector do that for me? Or not?

Yes, a Handle will be automatically closed sometime after it becomes
unreferenced.  However, the party line is "don't rely on this
behaviour", because it is inherently unpredictable, and if you get it
wrong you can end up running out of file descriptors.  hClose explicitly
when you can.
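
The usual way to keep the explicit hClose both easy and exception-safe is 
a bracket, e.g. this little sketch (later versions of System.IO provide 
withFile, which is essentially the same thing):

   import Control.Exception (bracket)
   import System.IO

   withInputFile :: FilePath -> (Handle -> IO r) -> IO r
   withInputFile path = bracket (openFile path ReadMode) hClose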

 And more specifically, what about the handles
 runInteractiveProcess returns? Do I have to close the
 stdin Handle? All of them? What happens when I use
 terminateProcess? Do I have to hClose them nonetheless?

The stdin handle is attached to a pipe, and you get the behaviour you
expect when you close the write end of a pipe: if a process tries to
read the other end of the pipe, it will get EOF.  After
terminateProcess, if you write to the stdin handle, you're likely to get
SIGPIPE on Unix.

(BTW, I assume you have a good reason for wanting to call
terminateProcess).

 And while I am at it: How about Socket? Do I have to sClose
 a socket I obtained from listenOn or accept?

A Socket isn't finalized automatically (that is, you need explicit
sClose). However, if you use socketToHandle, then the Handle will be
finalized, and hence the socket closed, when it becomes unreachable.

On 24 October 2004 23:37, John Goerzen wrote:

 * What happens when one Handle corresponding to a socket is closed,
   but another isn't?

You shouldn't have two Handles on the same socket.  This is an unchecked
error.

 * What happens when one gets GC'd but another doesn't?

See above.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Are handles closed automatically when they fall out of scope?

2004-10-25 Thread Simon Marlow
On 25 October 2004 14:24, John Goerzen wrote:

 On 2004-10-25, Simon Marlow [EMAIL PROTECTED] wrote:
 On 22 October 2004 21:58, Peter Simons wrote:
 
 On 24 October 2004 23:37, John Goerzen wrote:
 
 * What happens when one Handle corresponding to a socket is closed,
   but another isn't?
 
 You shouldn't have two Handles on the same socket.  This is an
 unchecked error.
 
 This does seem useful, though.  I am actually doing this in my code
 and it works.  One Handle is opened ReadOnly, the other WriteOnly. 
 That way, I can use hGetContents on the reading side in my network
 code. 
 
 If I tried that with a single Handle opened ReadWrite, then I'd get
 errors about it being closed whenever I'd try to write out some data.
 
 I wasn't able to find any other good way around it.

Hmmm, you should still be able to *write* to a socket handle that has
had hGetContents applied to it.  In GHC, a ReadWrite handle to a socket
basically consists of a wrapper around two independent Handles, one for
read and one for write, each with its own buffer.

... I just tested it with 6.2.2, and indeed it does work as I expected.
But perhaps there's a bug lurking somewhere?

If you do socketToHandle twice, then the danger is that one of the
Handles will be finalized and close the FD before the other Handle has
finished with it.

In 6.4 you'll be able to use hDuplicate for this, BTW.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: runInteractiveProcess is broken

2004-10-27 Thread Simon Marlow
I can't repeat this, it works here:

*Main> test1
ExitSuccess
*Main> test2
Just ExitSuccess

(after changing /usr/bin/sleep to /bin/sleep).

The only thing I can think of is that you somehow have a SIGCHLD handler
that calls wait(), but I don't see where that can be happening.  GHC
doesn't have any other mechanism for calling wait(), so I don't
understand how the zombies are disappearing before waitForProcess is
called.

Maybe run it through strace and send us the output?

Cheers,
Simon

On 26 October 2004 19:00, Peter Simons wrote:

 Neither of these functions returns the return code of the
 external process as promised:
 
   import System.IO hiding ( catch, try )
   import System.Process
   import Control.Concurrent
 
   sleep :: Int -> IO ()
   sleep n = threadDelay (abs(n) * 100)
 
   test1 :: IO ()
   test1 = do
         (_,_,_, pid) <- runInteractiveProcess "/usr/bin/sleep" ["1"]
                                               Nothing Nothing
         sleep 5
         rc <- waitForProcess pid
         print rc
 
   -- *Main> test1
   -- *** Exception: waitForProcess: does not exist (No child processes)
 
   test2 :: IO ()
   test2 = do
         (_,_,_, pid) <- runInteractiveProcess "/usr/bin/sleep" ["1"]
                                               Nothing Nothing
         sleep 5
         rc <- getProcessExitCode pid
         print rc
 
   -- *Main> test2
   -- Nothing
 
 I'm using the ghc from CVS-HEAD on Linux/x86.
 
 Peter
 
 ___
 Glasgow-haskell-users mailing list
 [EMAIL PROTECTED]
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Process library and signals

2004-10-27 Thread Simon Marlow
My apologies if I misinterpreted your comments.  There appear to be some
use cases and conventions here that I'm not altogether familiar with.

So basically you're saying that if runProcess is to be used in a
system()-like way, that is the parent is going to wait synchronously for
the child, then the parent should be ignoring SIGQUIT/SIGINT.  On the
other hand, if runProcess is going to be used in a popen()-like way,
then the parent should not be ignoring SIGQUIT/SIGINT.  The current
interface doesn't allow for controlling the behaviour in this way.

So the current signal handling in runProcess is wrong, and should
probably be removed.  What should we have instead?  We could implement
the system()-like signal handling for System.Cmd.system only, perhaps.

Cheers,
Simon

On 26 October 2004 23:38, Glynn Clements wrote:

 Having looked at the latest version of the Process library, it appears
 that my earlier comments about signal handling may have been
 misinterpreted.
 
 First, my comments regarding the handling of SIGINT/SIGQUIT were
 specific to system(). The C system() function ignores these signals in
 the parent while the child is executing. However, this doesn't
 necessarily apply to other functions; e.g. popen() doesn't ignore
 these signals, and runProcess probably shouldn't either.
 
 With system(), the parent blocks until the child is finished, so if
 the user presses Ctrl-C to kill the currently executing process,
 they probably want to kill the child. If the parent wants to die on
 Ctrl-C, it can use WIFSIGNALED/WTERMSIG to determine that the child
 was killed and terminate itself.
 
 OTOH, with popen(), the parent continues to run alongside the child,
 with the child behaving as a slave, so the parent will normally want
 to control the signal handling.
 
 Ideally, system() equivalents (e.g. system, rawSystem) would ignore
 the signals in the parent, popen() equivalents (e.g.
 runInteractiveProcess) wouldn't, and lower-level functions (e.g.
 runProcess) would give you a choice.
 
 Unfortunately, there is an inherent conflict between portability and
 generality, as the Unix and Windows interfaces are substantially
 different. Unix has separate fork/exec primitives, with the option to
 execute arbitrary code between the two, whilst Windows has a single
 primitive with a fixed set of options.
 
 Essentially, I'm not sure that a Windows-compatible runProcess would
 be sufficiently general to accurately implement both system() and
 popen() equivalents on Unix. Either system/rawSystem should be
 implemented using lower-level functions (i.e. not runProcess) or
 runProcess needs an additional option to control the handling of
 signals in the child.
 
 Also, my comment regarding the signals being reset in the child was
 inaccurate. system() doesn't reset them in the sense of SIG_DFL. It
 sets them to SIG_IGN before the fork(), recording their previous
 handlers. After the fork, it resets them in the child to the values
 they had upon entry to the system() function (i.e. to the values they
 had before they were ignored). The effect is as if they had been set
 to SIG_IGN in the parent after the fork(), but without the potential
 race condition.
 
 Thus, if they were originally ignored in the parent before system()
 was entered, they will be ignored in the child. If they were at their
 defaults (SIG_DFL) before system() was entered, they will be so in the
 child. If they had been set to specific handlers, system() will
 restore those handlers in the child, but then execve() will reset them
 to SIG_DFL, as the handler functions won't exist after the execve().

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Patchy GHC support for a week

2004-10-28 Thread Simon Marlow
Just to let everyone know that Simon PJ and myself will be away for the
rest of this week and the next, and will probably have intermittent
network connectivity, so it might take us a while to respond to
messages.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: happy and OPTIONS pragmas

2004-11-08 Thread Simon Marlow
On 04 November 2004 18:21, Ian Lynagh wrote:

 However, if a .y file starts:
 
 {
 {-# OPTIONS -w #-}
 -- Foo
 {-# OPTIONS -w #-}
 module Parser (parse) where
 }
 
 then the generated .hs file starts:
 
 -- parser produced by Happy Version 1.14
 -- Foo
 {-# OPTIONS -w #-}
 module Parser (parse) where
 
 so the pragma before my comment has been eaten and the 2 comments mean
 that ghc doesn't see the pragma, so gives me all the warnings anyway.
 
 Can happy be changed so my pragma gets through please?

Thanks, will be fixed in the next Happy release (fix is in CVS).

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: proposal for ghc-pkg to use a directory of .conf files

2004-11-08 Thread Simon Marlow
On 06 November 2004 10:10, Sven Panne wrote:

 Duncan Coutts wrote:
 I can knock up a proof-of-concept patch if anyone thinks this is a
 good idea. It should be totally backward compatible; it's ok to use
 both, but distro packagers might like to enforce a policy of using a
 directory of package files for external libraries.
 
 OK, just send us a patch and if there are no objections we can merge
 it into the HEAD.

In some ways this looks like a good idea, but it contradicts some of the
ideas in the Cabal proposal.  There, we were treating the package
database as an abstract entity hidden behind the ghc-pkg interface.  All
interaction with the database would be done via ghc-pkg.

The advantages of this abstraction are the usual ones: we might want to
change the representation, and the ghc-pkg tool provides a good place to
add backwards compatibility if necessary.

However, I'm prepared to be persuaded.  The "just put a file in this
directory" approach to installation is very compelling, being much more
transparent.  But bear in mind that if we pick this route, then
backwards compatibility has to be built into the file format (I think it
might be already, but we're planning changes in this area to better
support Cabal).  

Also, there needs to be a way to find the location to install the file -
asking ghc or ghc-pkg is the usual way.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: foreign export stdcall on user-defined type and function

2004-11-08 Thread Simon Marlow
On 08 November 2004 02:49, David Lo wrote:

 I'm new to haskell. I'm assigned to make a piece of Haskell code
 callable from C#. I find that I can convert Haskell to a DLL
 using ghc --mk-dll.
 
 This works fine for simple functions, but for the following the errors
 below are reported.
 
 foreign export stdcall doComp :: Pattern -> CString -> IO ()
 doComp :: Pattern -> CString -> IO ()
 
 foreign export stdcall evalDisplay :: RealFrac a => CString ->
 (Int -> VMC (Maybe a)) -> IO ()
 evalDisplay :: RealFrac a => CString -> (Int -> VMC (Maybe a)) -> IO ()

It would help if you posted the entire code, rather than just a snippet.
We can't tell for sure where the errors are without seeing the
definitions for some of the types you've used, for example.

 The errors are as follows:
 
 Compiling Main ( CPL.hs, interpreted )
 
 CPL.hs:29:
 Unacceptable argument type in foreign declaration:
 forall a b c. (Ind a c, Compare c b) => Pat a b
 When checking declaration:
 foreign export stdcall doComp doComp :: Pattern -> CString
 -> IO () 

The error indicates that you have declared a foreign exported function
with a type that does not have a direct translation into a C type.  The
legal types for foreign exported functions are described in the FFI
specification, which can be found online here:

  http://www.cse.unsw.edu.au/~chak/haskell/ffi/ffi.pdf

 CPL.hs:107:
 Unacceptable argument type in foreign declaration: {RealFrac a}
 When checking declaration:
 foreign export stdcall evalDisplay evalDisplay :: forall a.
 (RealFrac a) => CString -> (Int -> VMC (Maybe a)) -> IO ()
 
 CPL.hs:107:
 Unacceptable argument type in foreign declaration:
 Int -> VMC (Maybe a)
 When checking declaration:
 foreign export stdcall evalDisplay evalDisplay :: forall a.
 (RealFrac a) => CString -> (Int -> VMC (Maybe a)) -> IO ()
 
 I would like to inquire on how to use foreign function stdcall on self
 defined data structure and function. Can I just simply cast them to
 string ?

Perhaps you could give more details about what you want to do.  What
Haskell functions do you want to export to C#, and what types do they
have (both in Haskell and C#)?  If you want to export Haskell data
structures to C#, then you have to marshal the data into C#, by
converting the Haskell representation into a representation that C# can
understand.  Using strings is one possibility, but it's unlikely to be
the best.
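
For concreteness, here is a minimal sketch (not the poster's code: the
Pattern type and the real work are replaced by a plain String stand-in) of
the usual pattern: export a monomorphic wrapper whose argument and result
types have direct C representations, marshalling through CString, with
stdcall as in the poster's Windows setting:

  {-# OPTIONS -fglasgow-exts #-}
  module Export where

  import Foreign.C.String (CString, peekCString, newCString)

  -- Stand-in for the real Haskell-side computation.
  process :: String -> String
  process = reverse

  -- Only types with a direct C counterpart may appear in the export's
  -- type, so marshal through CString; the C/C# caller owns (and must
  -- eventually free) the returned buffer.
  processC :: CString -> IO CString
  processC cs = do
    s <- peekCString cs
    newCString (process s)

  foreign export stdcall processC :: CString -> IO CString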

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: proposal for ghc-pkg to use a directory of .conf files

2004-11-09 Thread Simon Marlow
On 08 November 2004 18:47, Duncan Coutts wrote:

 We can use ghc-pkg at the build / install-into-temp phase to create
 the $(package).conf files under
 $TMP_INSTALL_ROOT/usr/lib/ghc-$VER/package.conf.d/ and then final
 installation is just merging files without any post-install calls to
 ghc-pkg to modify installed files (ie the global ghc package.conf
 file) 
 
 So we can still keep the abstraction of $HC-pkg and gain simpler
 packaging stuff.

Ok, sounds reasonable.

I'm going to be working on the package support in ghc and ghc-pkg to
improve support for Cabal, so let's do this at the same time.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Looing for advice on profiling

2004-11-09 Thread Simon Marlow
On 09 November 2004 12:54, Duncan Coutts wrote:

[snip]
 When I do time profiling, the big cost centres come up as putByte and
 putWord. When I profile for space it shows the large FiniteMaps
 dominating most everything else. I originally guessed from that that
 the serialisation must be forcing loads of thunks which is why it
 shows up so highly on the profile. However even after doing the
 deepSeq before serialisation, it takes a great deal of time, so I'm
 not sure what's going on.

let's get the simple things out of the way first: make sure you're
compiling Binary with -O -funbox-strict-fields (very important).  When
compiling for profiling, don't compile Binary with -auto-all, because
that will add cost centres to all the small functions and really skew
the profile.  I find this is a good rule of thumb when profiling: avoid
-auto-all on your low-level libraries that you hope to be inlined a lot.

You say your instances are created using DrIFT - I don't think we ever
modified DrIFT to generate the right kind of instances for the Binary
library in GHC, so are you using the instances designed for the nhc98
binary library?  If so, make sure your instances are using put_ rather
than put, because the former will allow binary output to run in constant
stack space.
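
For reference, an instance written against put_ looks roughly like this
(a sketch assuming GHC's internal Binary interface - BinHandle, put_, get,
putByte, getByte - rather than the nhc98 one; the Colour type is made up):

  import Binary  -- GHC's internal module (compiler/utils/Binary.hs)

  data Colour = Red | Green | Blue

  instance Binary Colour where
      put_ bh Red   = putByte bh 0
      put_ bh Green = putByte bh 1
      put_ bh Blue  = putByte bh 2
      get bh = do
          tag <- getByte bh
          case tag of
            0 -> return Red
            1 -> return Green
            _ -> return Blue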

Are you using BinMem, or BinIO?

 The retainer profiling again shows that the FiniteMaps are holding on
 to most stuff.
 
 A major problem no doubt is space use. For the large gtk/gtk.h, when I
 run with +RTS -B to get a beep every major garbage collection, the
 serialisation phase beeps continuously while the file grows.
 Occasionally it seems to freeze for 10s of seconds, not dong any
 garbage collection and not doing any file output but using 100% CPU,
 then it carries on outputting and garbage collecting furiously. I
 don't know how to work out what's going on when it does that.

I agree with Malcolm's conjecture: it sounds like a very long major GC
pause.

 I don't understand how it can be generating so much garbage when it is
 doing the serialisation stuff on a structure that has already been
 fully deepSeq'ed.

Yes, binary output *should* do zero allocation, and binary input should
only allocate the structure being created.  The Binary library is quite
heavily tuned so that this is the case (if you compile with profiling
and -auto-all, it will almost certainly break this property, though).

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Looing for advice on profiling

2004-11-10 Thread Simon Marlow
On 09 November 2004 17:04, Duncan Coutts wrote:

 Are you using BinMem, or BinIO?
 
 BinIO

Ah.  BinIO is going to be a lot slower than BinMem, because it does
an hPutChar for each character, whereas BinMem just writes into an
array.  I never really optimised the BinIO path, because we use BinMem
exclusively in GHC.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: proposal for ghc-pkg to use a directory of .conf files

2004-11-10 Thread Simon Marlow
On 09 November 2004 17:36, Isaac Jones wrote:

 Simon Marlow [EMAIL PROTECTED] writes:
 
 On 08 November 2004 18:47, Duncan Coutts wrote:
 
 We can use ghc-pkg at the build / install-into-temp phase to create
 the $(package).conf files under
 $TMP_INSTALL_ROOT/usr/lib/ghc-$VER/package.conf.d/ and then final
 installation is just merging files without any post-install calls to
 ghc-pkg to modify installed files (ie the global ghc package.conf
 file) 
 
 So we can still keep the abstraction of $HC-pkg and gain simpler
 packaging stuff.
 
 Ok, sounds reasonable.
 
 I'm going to be working on the package support in ghc and ghc-pkg to
 improve support for Cabal, so let's do this at the same time.
 
 As a Debian packager, I like the idea of changing the way HC-PKG
 handles individual packages.
 
 The question in my mind is whether we want to execute any code on the
 install target.  Previously, I have thought of ./setup register as
 being a step that happens on the target, no matter what.  So if Marcus
 Makefile wants to do something specifically for the target at install
 time, this is where he could do it.
 
 If we go this route and have the package registration happen at
 install-in-temp time, then we don't have any standard way to run a
 post-install script.  Some people may prefer that we never execute
 anything from Cabal on the target, but I would prefer to leave that
 ability.
 
 One solution would be to move the registration step into
 install-into-temp time, as above, but to add another standard command
 to Cabal like ./setup postinstall and maybe some others preinst,
 prerm, postrm as in Debian.
 
 This would solve both problems; haskell packages installed with a
 packaging system like Debian would usually just be moving files into
 place, but if Marcus or Angela really needed to run something on the
 target, this is how they'd do it.

If ./setup register isn't going to run at install time, then I agree we
might want ./setup postinst too.

There's another thing that bothers me though: when you install a package
using hc-pkg, a number of checks are made:

 1. there isn't already a package with that name/version

 2. If the package is to be exposed, then the modules provided by the
package don't overlap with another exposed package.

 3. if an older version of the package is already exposed, then
the older one is supposed to be hidden in favour of the new one

Since with the proposed change hc-pkg isn't running on the target
system, it can't make any of these tests.  GHC can detect at run-time
that you have overlapping packages, but then it might not be possible to
make changes to the package database (you might need to 'su' in order to
do it).

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Compiling Haskell on an UltraSparc/NetBSD

2004-11-16 Thread Simon Marlow
On 15 November 2004 21:16, Stephane Bortzmeyer wrote:

 [Not subscribed to haskell-users so please copy me the answers.]
 
 To compile the revision control system Darcs (http://www.darcs.net/),
 I need Haskell but I'm not myself a Haskell user.
 
 My machine is an UltraSparc 10 running NetBSD 1.6.2 userland and 2.0
 kernel.
 
 There is a package source for Glasgow Haskell, so, let's try in
 /usr/pkgsrc:
 
 % make
 ...
 checking build system type... sparc64-unknown-netbsd2.0.
 checking host system type... sparc64--netbsd
 checking target system type... sparc64--netbsd
 Unrecognised platform: sparc64--netbsd
 gmake: Entering directory
 `/usr/pkgsrc/lang/ghc/work/ghc-6.2.1/glafp-utils'
 ../mk/boilerplate.mk:66: ../mk/config.mk: No such file or directory
 You haven't run ./../configure yet.
 gmake: *** [../mk/config.mk] Error 1

There isn't an existing build for your platform, which means you'll have
to bootstrap.  Also, it looks like a small amount of porting effort will
be required (the configure script hasn't recognised the platform, so at
least you'll need to add a few obvious lines in there).

Full porting instructions are here:

http://www.haskell.org/ghc/docs/latest/html/building/sec-porting-ghc.htm
l

Bootstrapping generally works quite smoothly, but if you run into
problems we're always happy to help.  Send us any patches you end up
needing.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: [Haskell] package with ghc and ghci

2004-11-16 Thread Simon Marlow
[ moved to [EMAIL PROTECTED] ]

On 16 November 2004 07:51, Fred Nicolier wrote:

 I have some packages for  doing  signal and  image processing stuff.
 Here is a little test  program :
 
 \begin{code}
 module Main where
 import Hips
 a = listSignal (1,10) [1..10]
 b = liftSignals (:+) a a
 c = fft b
 main = do
putStrLn $ show a
putStrLn $ show b
putStrLn $ show c
 \end{code}
 
 1/ Compiled with : ghc -package hips testFFT.hs
 2/ interpreted with : ghci -package hips testFFT.hs
 
 1/ no problem
 2/ dont execute and gives 'unknown package name: =Numeric' (Numeric is
 another package called by Hips, included in HaskellDSP).

Please send the output of command (2) with -v added to the command line.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Bug in touchForeignPtr?

2004-11-22 Thread Simon Marlow
On 20 November 2004 23:02, Benjamin Franksen wrote:

 I am using Foreign.Concurrent.newForeignPtr and touchForeignPtr
 inside the finalizers to express liveness dependencies as hinted to
 by the documentation. This doesn't seem to work, though, or at least
 I can't see what I've done wrong. I attached a test module; compiled
 with ghc -fglasgow-exts --make TestForeignTouchBug.hs, ghc version
 6.2.2, this gives 
 
 .../foreigntouchbug  ./a.out
 hit enter here
 before finalizing A
 after finalizing A
 before finalizing B
 after finalizing B
 hit enter here
 
 I expected the order of the finalizer calls be be the other way
 around, since the finalizer for the Bs explicitly touches the A value.

The problem is that the runtime is running all outstanding finalizers at
the end of execution, without regard for GC dependencies like the ones
introduced by touchForeignPtr.

I've been planning to remove this automatic running of finalizers for
other reasons.   However, then you will get absolutely no guarantee that
your finalizer will ever run at all (indeed, the property isn't always
true right now, but it is usually true).

Let me share with you something that I've come to appreciate over the
last few years:

  Finalizers are almost always not the right thing.

Finalizers look terribly attractive, but often lead to a huge can of
worms - best avoided if at all possible.
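
For reference, the liveness-dependency idiom under discussion looks
roughly like this (a sketch, not the original test module; the Ptrs and
the actions in the finalizers are placeholders):

  import Foreign.Ptr (Ptr)
  import Foreign.ForeignPtr (ForeignPtr, touchForeignPtr)
  import qualified Foreign.Concurrent as FC

  -- b's finalizer touches a, intending "finalize B before A".  As
  -- explained above, the run-everything-at-exit pass ignores this
  -- dependency; only ordinary GC-triggered finalization respects it,
  -- and even then only as a liveness hint, not a strict ordering.
  mkPair :: Ptr () -> Ptr () -> IO (ForeignPtr (), ForeignPtr ())
  mkPair pa pb = do
    a <- FC.newForeignPtr pa (putStrLn "finalizing A")
    b <- FC.newForeignPtr pb (putStrLn "finalizing B" >> touchForeignPtr a)
    return (a, b)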

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: unlit/ghci does not work on DOS file

2004-11-22 Thread Simon Marlow
On 18 November 2004 20:31, Christian Maeder wrote:

 calling unlit on a DOS file fails, whereas hugs is able to process the
 same file (under unix).
 
 Christian
 
 Prelude> readFile "Test.lhs" >>= putStrLn . show
 "\r\n> module Test where\r\n\r\n"
 Prelude> :l Test.lhs
 Test.lhs line 2: unlit: Program line next to comment
 phase `Literate pre-processor' failed (exitcode = 1)

Thanks, fixed.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: proposal for ghc-pkg to use a directory of .conf files

2004-11-22 Thread Simon Marlow
On 21 November 2004 00:56, Isaac Jones wrote:

 The systems that would want to do this kind of thing, such as Debian,
 have other mechanisms for deciding whether packages conflict, etc.

IIRC, this is the argument I just used against adding support for
multiple libraries in Cabal, so I guess I agree :-D
 
 Over-all I'm kinda neutral about whether HC-pkg needs to be an opaque
 interface to the packaging system.  What are the advantages to this?

Well, for one thing it allows us flexibility in how we store the package
database.  In GHC, I'm using the show/read form of
[InstalledPackageInfo] to store the database, but it'd be nice if I
could use binary serialisation in the future.

To support a directory of config files, we don't have to expose the
complete format, though.  As long as hc-pkg can process the
InstalledPackageInfo to produce the native format into a file, then we
just ship that file with the distribution.  So I'm fine with this, as
long as we're not specifying the contents of the *.conf file.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Bug in touchForeignPtr?

2004-11-23 Thread Simon Marlow
On 22 November 2004 17:28, Benjamin Franksen wrote:

 I understand that there are situations where finalizers cannot be
 guaranteed to run: First because of an unconditional termination
 signal (SIGKILL), second because of circular dependencies resulting
 in a deadlock. 
 
 I don't understand why it is necessary to performGC explicitly, in
 order to run finalizers at *normal* program termination and without a
 deadlock. 

It isn't necessary to performGC explicitly.  However, you might be
seeing a secondary effect due to Handles being finalised at program
termination too - a common problem is writing a finaliser that tries to
output something on a Handle, where the Handle gets finalised first.
The original finaliser will then deadlock (or in 6.2.2 it'll get a more
informative exception).

This is because at program termination we just run all the outstanding
finalisers without regard to ordering.  Ordering is too hard to
establish, and would at the least require a full GC before running each
finaliser.

 BTW, the sensible thing to do in this case would be to throw an
 exception whenever a deadlock condition is detected. (That is, if it
 can be detected.) 

Yes, GHC does cause exceptions to be raised on deadlock.

 However, what I don't understand is why touchForeignPtr is not
 honored in my example program: Note that the output text lines from
 the finalizers appear *before* the last action in the program (which
 is a second getChar). The finalizers *are* called by the GC, and
 still the order is wrong. 

Note that the GC only starts the finaliser thread.  The program can
still terminate before this thread has run to completion (this is one
reason why we say that finalisers don't always run before program
termination).

You have a point that the documentation is plain wrong.  I'll try to fix
it up for 6.4, and I think at the same time I'll remove the at-exit
finalisation.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: [Haskell] package with ghc and ghci

2004-11-23 Thread Simon Marlow
I'm afraid that hasn't helped.  GHC appears to be looking for a package
'=DSP', but there's no indication why.

Could you give us enough instructions to be able to reproduce the
problem here, please?  Including:

  - full source code, or where to get it
  - complete command lines
  - platform, GHC version

Cheers,
Simon

On 17 November 2004 12:37, Fred Nicolier wrote:

 Simon Marlow wrote:
 
 
 Please send the output of command (2) with -v added to the command
 line. 
 
 Cheers,
  Simon
 
 
 
 I have isolated the dependancies of the packages : the work file is
 now : \begin{code}
 module Main where
 import Data.Array
 import DSP.Filter.FIR.FIR
 import DSP.Filter.FIR.Sharpen
 import DSP.Source.Basic
 n :: Int
 n = 1000
 h :: Array Int Double
 h = listArray (0,16) [ -0.016674..]
 y1 = fir h $ impulse
 
 main = do
 putStrLn $ show (head y1)
 \end{code}
 
 Here is the output of 'ghci -v -package DSP test.hs' :
 ghci -v -package DSP test.hs
    ___         ___ _
   / _ \ /\  /\/ __(_)
  / /_\// /_/ / /  | |  GHC Interactive, version 6.2.1, for Haskell 98.
 / /_\\/ __  / /___| |  http://www.haskell.org/ghc/
 \____/\/ /_/\____/|_|  Type :? for help.
 
 Glasgow Haskell Compiler, Version 6.2.1, for Haskell 98, compiled by
 GHC version 6.2.1
 Using package config file: /usr/local/lib/ghc-6.2.1/package.conf
 
  Packages 
 Package
 {name = data,
 auto = False,
 import_dirs = [/usr/local/lib/ghc-6.2.1/hslibs-imports/data],
 source_dirs = [],
 library_dirs = [/usr/local/lib/ghc-6.2.1],
 hs_libraries = [HSdata],
 extra_libraries = [],
 include_dirs = [],
 c_includes = [],
 package_deps = [haskell98, lang, util],
 extra_ghc_opts = [],
 extra_cc_opts = [],
 extra_ld_opts = [],
 framework_dirs = [],
 extra_frameworks = []}
 Package
 {name = DSP,
 auto = False,
 import_dirs = [/usr/local/lib/imports/HDsp],
 source_dirs = [],
 library_dirs = [/usr/local/lib],
 hs_libraries = [HSDSP],
 extra_libraries = [],
 include_dirs = [],
 c_includes = [],
 package_deps = [data, Polynomial, Matrix, Numeric],
 extra_ghc_opts = [],
 extra_cc_opts = [],
 extra_ld_opts = [],
 framework_dirs = [],
 extra_frameworks = []}
 Package
 {name = base,
 auto = True,
 import_dirs = [/usr/local/lib/ghc-6.2.1/imports],
 source_dirs = [],
 library_dirs = [/usr/local/lib/ghc-6.2.1],
 hs_libraries = [HSbase],
 extra_libraries = [HSbase_cbits],
 include_dirs = [],
 c_includes = [HsBase.h],
 package_deps = [rts],
 extra_ghc_opts = [],
 extra_cc_opts = [],
 extra_ld_opts = [],
 framework_dirs = [],
 extra_frameworks = []}
 Package
 {name = rts,
 auto = False,
 import_dirs = [],
 source_dirs = [],
 library_dirs = [/usr/local/lib/ghc-6.2.1],
 hs_libraries = [HSrts],
 extra_libraries = [m, gmp],
 include_dirs = [/usr/local/lib/ghc-6.2.1/include],
 c_includes = [Stg.h],
 package_deps = [],
 extra_ghc_opts = [],
 extra_cc_opts = [],
 extra_ld_opts =
 [-u,
 GHCziBase_Izh_static_info,
 -u,
 GHCziBase_Czh_static_info,
 -u,
 GHCziFloat_Fzh_static_info,
 -u,
 GHCziFloat_Dzh_static_info,
 -u,
 GHCziPtr_Ptr_static_info,
 -u,
 GHCziWord_Wzh_static_info,
 -u,
 GHCziInt_I8zh_static_info,
 -u,
 GHCziInt_I16zh_static_info,
 -u,
 GHCziInt_I32zh_static_info,
 -u,
 GHCziInt_I64zh_static_info,
 -u,
 GHCziWord_W8zh_static_info,
 -u,
 GHCziWord_W16zh_static_info,
 -u,
 GHCziWord_W32zh_static_info,
 -u,
 GHCziWord_W64zh_static_info,
 -u,
 GHCziStable_StablePtr_static_info,
 -u,
 GHCziBase_Izh_con_info,
 -u,
 GHCziBase_Czh_con_info,
 -u,
 GHCziFloat_Fzh_con_info,
 -u,
 GHCziFloat_Dzh_con_info,
 -u,
 GHCziPtr_Ptr_con_info,
 -u,
 GHCziPtr_FunPtr_con_info,
 -u,
 GHCziStable_StablePtr_con_info,
 -u,
 GHCziBase_False_closure,
 -u,
 GHCziBase_True_closure,
 -u,
 GHCziPack_unpackCString_closure,
 -u,
 GHCziIOBase_stackOverflow_closure,
 -u,
 GHCziIOBase_heapOverflow_closure,
 -u,
 GHCziIOBase_NonTermination_closure,
 -u,
 GHCziIOBase_BlockedOnDeadMVar_closure,
 -u,
 GHCziIOBase_Deadlock_closure,
 -u,
 GHCziWeak_runFinalizzerBatch_closure,
 -u,
 __stginit_Prelude,
 -L/usr/local/lib],
 framework_dirs = [],
 extra_frameworks = []}
 Package
 {name = haskell98,
 auto = True,
 import_dirs = [/usr/local/lib/ghc-6.2.1/imports],
 source_dirs = [],
 library_dirs = [/usr/local/lib/ghc-6.2.1],
 hs_libraries = [HShaskell98],
 extra_libraries = [],
 include_dirs = [],
 c_includes = [],
 package_deps = [base],
 extra_ghc_opts = [],
 extra_cc_opts = [],
 extra_ld_opts = [],
 framework_dirs = [],
 extra_frameworks = []}
 Package
 {name = haskell-src,
 auto = True,
 import_dirs = [/usr/local/lib/ghc-6.2.1/imports],
 source_dirs = [],
 library_dirs = [/usr/local/lib/ghc-6.2.1],
 hs_libraries = [HShaskell-src],
 extra_libraries = [],
 include_dirs = [],
 c_includes = [],
 package_deps = [base, haskell98],
 extra_ghc_opts = [],
 extra_cc_opts = [],
 extra_ld_opts = [],
 framework_dirs = [],
 extra_frameworks = []}
 Package
 {name = network,
 auto = True,
 import_dirs = [/usr/local/lib/ghc-6.2.1/imports],
 source_dirs = [],
 library_dirs = [/usr/local/lib/ghc-6.2.1],
 hs_libraries

RE: optimized compilation fails with gcc 3.4.3 under solaris

2004-11-23 Thread Simon Marlow
On 17 November 2004 14:01, Christian Maeder wrote:

 [EMAIL PROTECTED] - uname -a
 SunOS leo 5.8 Generic_117000-05 sun4u sparc SUNW,Sun-Fire-280R
 [EMAIL PROTECTED] - ghc --version
 The Glorious Glasgow Haskell Compilation System, version 6.2.1
 [EMAIL PROTECTED] - gcc -v
 Reading specs from
 /export/software/mirror/sparc-solaris/lang/bin/../lib/gcc/sparc-sun-solaris2.8/3.4.3/specs
 Configured with: ../gcc-3.4.3/configure --prefix=/usr/local/lang
 -program-suffix=_3.4.3 --with-as=/usr/ccs/bin/as
 --with-ld=/usr/ccs/bin/ld --enable-version-specific-runtime-libs
 --enable-languages=c,c++,f77 --enable-shared=libstdc++ --disable-nls
 Thread model: posix
 gcc version 3.4.3
 [EMAIL PROTECTED] - ghc --make Main.hs -O
 Chasing modules from: Main.hs
 Compiling Main ( Main.hs, Main.o )
 /tmp/ghc14393.hc: In function `__stginit_Main':
 /tmp/ghc14393.hc:5: note: if this code is reached, the program will abort
 /tmp/ghc14393.hc: In function `__stginit_ZCMain':
 /tmp/ghc14393.hc:12: note: if this code is reached, the program will abort
 /tmp/ghc14393.hc: In function `Main_a_entry':
 /tmp/ghc14393.hc:33: note: if this code is reached, the program will abort
 /tmp/ghc14393.hc: In function `Main_main_slow':
 /tmp/ghc14393.hc:49: note: if this code is reached, the program will abort
 /tmp/ghc14393.hc: In function `Main_main_entry':
 /tmp/ghc14393.hc:64: note: if this code is reached, the program will abort
 /tmp/ghc14393.hc: In function `ZCMain_main_slow':
 /tmp/ghc14393.hc:79: note: if this code is reached, the program will abort
 Linking ...
 [EMAIL PROTECTED] - ./a.out
 Illegal Instruction

Ah yes, I remember this.  I think it was my bug report which caused the
gcc folks to implement that warning message :-)  GCC 3.4 is being
terribly helpful by taking a rather extreme interpretation of the term
undefined behaviour in the C99 spec to mean abort.

Unfortunately it looks like I haven't patched GHC to work around it.  I
patched my tree on the sourceforge Sparc machine I was using, but I
can't access that right now (perhaps my account expired).

IIRC, it was a simple change to the type cast in the definition of the
JMP_ macro in ghc/includes/TailCall.h.  Maybe change StgFunPtr to
StgFun?  You'll know when you get it right, because the warning will go
away.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Bug in touchForeignPtr?

2004-11-23 Thread Simon Marlow
On 23 November 2004 13:46, Keean Schupke wrote:

 Simon Marlow wrote:
 
 Note that the GC only starts the finaliser thread.  The program can
 still terminate before this thread has run to completion (this is one
 reason why we say that finalisers don't always run before program
 termination). 
 
 
This sounds like a bug to me... surely you should wait for all
 forked threads to finish before the RTS exits.

No, the fact that GHC doesn't wait for all forked threads before
terminating is the intended behaviour.  If you want anything else, you
can implement it.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: CWString API

2004-11-30 Thread Simon Marlow
On 30 November 2004 09:35, John Meacham wrote:

 On Tue, Nov 30, 2004 at 12:41:04AM -0800, Krasimir Angelov wrote:
Hello guys,
 
 I am working on updated version of HDirect and now I
 am going to use CWString API to marshal (wchar_t *)
 type to String. I found some inconsistencies in the API.
   - castCWcharToChar and castCharToCWchar functions
 are defined only for Posix systems and they aren't
 exported. In the same time castCCharToChar and
 castCharToCChar have the same meaning and they are
 defined and exported on all platforms.
 
 The problem is that these operations are very unsafe, there is no
 guarenteed isomorphism or even injection between wchar_ts and Chars.
 If people really know what they are doing, they can do the conversion
 themselves via fromIntegral/ord/chr, but I don't think we should
 encourage such unsafe usage with functions when it is simple for the
 user to work around it themselves.

That's right - castCWcharToChar and its dual are unlikely to be correct
on Windows, where wchar_t is UTF-16.

However, AFAICS the whole Windows API works in terms of UTF-16, only
dealing with surrogate pairs in the text output routines.  So it might
sometimes be more convenient and efficient, but not strictly speaking
correct, to do no conversion between a UTF-16 value and Haskell's Char
in the FFI on Windows.  I think we want to provide an interface that
lets you do this if you know what you're doing.
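
As a concrete illustration of that interface question, marshalling through
CWString currently looks like this (hs_greet is a made-up wide-character C
function, used only for the example):

  {-# OPTIONS -fglasgow-exts #-}
  module WideCall where

  import Foreign.C.String (CWString, withCWString, peekCWString)

  -- On Windows each CWchar is a UTF-16 code unit, so a code-unit-per-Char
  -- conversion is convenient and usually adequate, but not strictly
  -- correct for characters outside the BMP (surrogate pairs).
  foreign import ccall "hs_greet"
    c_greet :: CWString -> IO CWString

  greet :: String -> IO String
  greet name = withCWString name (\p -> c_greet p >>= peekCWString)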

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: building cygwin

2004-12-06 Thread Simon Marlow
On 04 December 2004 01:49, Ben Kavanagh wrote:

 I'm going to create a standard dist for cygwin.
 
 In order to do so, according to the porting guide I need to build a
 set of .hc files with mingw32(same hardware) and then use hc-build
 with those. To create the hc files from mingw32 for use with cygwin I
 should just have to define toplevel build.mk as follows is that
 correct? 
 
 
 GhcLibHcOpts += -keep-hc-files
 GhcStage1HcOpts += -keep-hc-files
 GhcStage2HcOpts += -keep-hc-files

Yes, I think so.

 and then use 'find' to pull out all of the .hc files.

The target hc-file-bundle in the top-level Makefile should do the
right thing.

 After that It should be fairly simple to get a Cygwin build going
 right? -Ben

I doubt it'll be trivial - probably lots of 

  #ifdef mingw32_TARGET_OS

will need to change to 
 
  #if defined(mingw32_TARGET_OS) || defined(cygwin_TARGET_OS)

but there shouldn't be any major new code to write.  I'd proceed by
grepping for mingw32_TARGET_OS and checking each one to see whether it
should change, otherwise you could end up with hard-to-find bugs that
show up months later.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: A question about the awkward squad

2004-12-06 Thread Simon Marlow
On 04 December 2004 04:27, Judah Jacobson wrote:

 What exactly are redexes, in this context,

Any expression which can be beta-reduced or case-reduced.

 and is it (still?) true that GHC never expands them?

I'm not sure if GHC guarantees never to duplicate a redex, Simon PJ
might know.

 Or are there certain types of redexes
 that aren't?  Or is it just really complicated? :-)  For example, if I
 understand this right, does it mean that in the classic top-level
 unsafePerformIO+NOINLINE hack, the NOINLINE is actually unnecessary in
 some or all cases?

Probably, yes.  The reason being that currently unsafePeformIO itself is
marked NOINLINE, so GHC can't see its definition, so it won't ever
duplicate an expression of the form (unsafePerformIO e) because that
would duplicate an unbounded amount of work.  The NOINLINE pragma is
useful documentation, though.
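
The hack in question, for reference (a minimal sketch):

  import Data.IORef
  import System.IO.Unsafe (unsafePerformIO)

  -- The NOINLINE pragma documents the intent that this redex must not be
  -- duplicated; as noted above, unsafePerformIO's own NOINLINE already
  -- prevents GHC from duplicating the expression in practice.
  {-# NOINLINE counter #-}
  counter :: IORef Int
  counter = unsafePerformIO (newIORef 0)

  bump :: IO Int
  bump = do
    n <- readIORef counter
    writeIORef counter (n + 1)
    return n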

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: did util.memo vanish?

2004-12-06 Thread Simon Marlow
On 06 December 2004 06:03, Adam Megacz wrote:

 Hrm, is the GHC magic memo function still around?  In 5.0.4 it was
 in util, but I can't seem to find it in 6.2.2.

It's still there in module Memo, in the util package.  It's scheduled
for demolition in 6.6.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: A question about the awkward squad

2004-12-06 Thread Simon Marlow
On 06 December 2004 13:25, Simon Peyton-Jones wrote:

 and is it (still?) true that GHC never expands them?
 
 I'm not sure if GHC guarantees never to duplicate a redex, Simon PJ
 might know.
 
 Yes, it's very careful not to duplicate a redex, except for ones of
 known bounded size, like x +# y, where sharing the work costs more
 than duplicating it.  

I seem to recall that redexes of known bounded size includes

   case z of (a,b) -> e

right?  We sometimes push case expressions inside lambdas to bring
lambdas together, for example.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: building cygwin

2004-12-07 Thread Simon Marlow
On 06 December 2004 21:31, Sven Panne wrote:

 Hmmm, having some fragile OS-dependent #ifdefs is not the way to go.
 While you are there, every
 
 #ifdef mingw32_TARGET_OS
 
 which needs to be changed should be replaced by something
 feature-specific like
 
 #if HAVE_FOO_BAR_FUNCTION

Good point, thanks Sven.

Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: did util.memo vanish?

2004-12-08 Thread Simon Marlow
On 08 December 2004 09:10, Adam Megacz wrote:

 Simon Marlow [EMAIL PROTECTED] writes:
 It's still there in module Memo, in the util package.  It's scheduled
 for demolition in 6.6.
 
 Huh... why?  It's pretty convenient, especially if you're aware of the
 relevant GC issues and don't mind them  although I do wish that
 there were a version of type
 
   (Storable a, Eq a) => (a -> b) -> (a -> b)
 
 that would use hash-the-bitserialized-representation equality rather
 than pointer/stableName equality.

No particularly good reason; my impression is that it isn't used much,
performance isn't that great, and it's only of limited applicability
(i.e. when you want to use pointer equality rather than any other kind
of equality over keys).  It needs to be moved into the hierarchical libs
somewhere, of course.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Reading/Writing sockets concurrently

2004-12-10 Thread Simon Marlow
On 10 December 2004 10:55, Mårten Dolk wrote:

 Btw, you import GHC.IO and not System.IO. What is the difference
 between those two?

System.IO is guaranteed to be there in the next release :-)

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Stack overflow machine dependent?

2004-12-13 Thread Simon Marlow
On 14 December 2004 10:46, Arjen van Weelden wrote:

 I compiled a Haskell 98 program using ghc 6.2 -O and ran the
 executable, using main +RTS -H256m -s, on two similar computers:
 
 PC 1: Athlon 1400, 512MB, Windows XP Prof SP2 (successful termination)
 PC 2: Athlon XP 1800+, 512MB, Windows XP Prof SP2 (stack overflow)
 
 The program runs successful within 1MB of stack on PC 1, but it exits
 with a stack overflow error on PC 2.
 
 Has anyone else observed such behaviour using the same binary, and
 similar computers? I'm curious of what might trigger the stack
 overflow. The program itself is partially generated and large, so
 I'll omit it for the time being.

Is the program completely self-contained and deterministic?  i.e. does
it read any files in the filesystem, check the time, or do anything that
might give different results on the two machines?

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: internal error: weird activation record found on stack: 9

2004-12-17 Thread Simon Marlow
On 16 December 2004 16:12, Peter Simons wrote:

 I'm getting this error in my software every now and then:
 
   postmaster: internal error: scavenge_stack: weird activation record
   found on stack: 9
   Please report this as a bug to [EMAIL PROTECTED],
   or http://www.sourceforge.net/projects/ghc/
 
 The process runs just fine for several days, and then it
 crashes with this message out of the sudden. There is no
 apparent cause. I'm using a fairly recent CVS ghc to compile
 the program on Linux/x86.
 
 Any idea what I can do to help tracking this bug down?

Please compile the program with -debug, then open it with gdb.  Set a
breakpoint on barf() and run the program:

  (gdb) break barf
  (gdb) run

and wait for it to hit the breakpoint.  Then do

  (gdb) signal SIGABRT

to get a core dump.  Send the source, binary, and core dump to us.  With
any luck, we'll be able to track it down without running the program.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Reading/Writing sockets concurrently

2004-12-10 Thread Simon Marlow
On 09 December 2004 17:46, Mårten Dolk wrote:

 I'm trying to create a client application that needs to read and write
 data from a socket. What I want to do is to have one thread reading
 the socket and another thread writing to the same socket, but I can't
 get it to work.
 
 I know that there is a problem with having two threads reading from
 the same Handle (as in the   example), due to the locking. To
 avoid this blocking I open the socket with Network.Socket.socket and
 create two handles with socketToHandle, one in ReadMode and one in
 WriteMode. 

Actually you don't need to create two Handles to a socket - the socket Handle 
is a duplex Handle with two locks, so the read and write sides don't 
interfere with each other.

In fact, you shouldn't create two Handles to the same socket, because as soon 
as one of the finalizers runs it will close the socket.
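
For concreteness, a sketch of the single-Handle approach (the protocol
lines and the buffering choice are made up; error handling is omitted):

  import Network.Socket (Socket, socketToHandle)
  import System.IO
  import Control.Concurrent (forkIO)

  -- One duplex Handle on the socket: a forked thread reads from it while
  -- the main thread writes to it; the two sides have separate locks and
  -- do not block each other.
  talk :: Socket -> IO ()
  talk sock = do
    h <- socketToHandle sock ReadWriteMode
    hSetBuffering h LineBuffering
    _ <- forkIO (reader h)
    mapM_ (hPutStrLn h) ["HELO example", "QUIT"]
    where
      reader h = do
        line <- hGetLine h
        putStrLn ("got: " ++ line)
        reader h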

 But the threads still seem to be blocking each other and the program
 runs very slowly. I am using channels to have the read and write
 threads communicate with other threads in the application and by
 removing a writeChan in one of these other threads the program
 crashes with a Fail: thread blocked indefinitely.

Looks like a deadlock of some kind.  Compiling with -debug and running with 
+RTS -Ds might give you some clues.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: ghc for hp-ux 11.11...

2005-01-04 Thread Simon Marlow
On 21 December 2004 17:38, Sven Panne wrote:

 [EMAIL PROTECTED] wrote:
 Okay, I've tried to follow the directions, and ran into a couple
 minor issues I was able to work through, but I got stuck at the cd
 H/libraries && make boot && make stage.  The host system is redhat
 enterprise linux 3, the target is hp-ux 11.11, the output is below: [...]
 
 It looks like you try to get GHC 6.2.2 up and running. Could you try
 the GHC from CVS HEAD instead, please? The configuration stuff has
 changed quite a bit, and I'm reluctant to work on the old STABLE
 branch. Could you send a log of the configuration/building plus all
 config.log and config.status files, please? In ancient times, I made
 GHC run on HP-UX 9, but I guess that things suffer from bit rot...

Looks like he has given up and submitted a feature request :-)  But
anyway, the HEAD is probably not in a good state for bootstrapping right
now (unless anyone has evidence to the contrary?)  I plan to make sure
bootstrapping works before we release 6.4.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: internal error: weird activation record found on stack: 9

2005-01-04 Thread Simon Marlow
On 03 January 2005 15:19, Peter Simons wrote:

 I wrote:
 
   Simon Marlow writes:
 
   Please compile the program with -debug, then open it
   with gdb.
 
   Unfortunately, -debug seems to conflict with -threaded:
 
 ghc --make -threaded -debug -O -Wall [...] -o postmaster tutorial.lhs
 [...]
 Chasing modules from: tutorial.lhs
 [...]
 Compiling Main ( tutorial.lhs, .objs/Main.o )
 Linking ...
 /usr/lib/gcc-lib/i686-pc-linux-gnu/bin/ld: cannot find -lHSrts_thr_debug
 collect2: ld returned 1 exit status
 
 It has been a while since this problem came up, and I was
 wondering what to do now, because the software keeps
 crashing (or freezing) every few days.
 
 Any advice, anyone?

If your program still works without -threaded, then that's an option.
Otherwise, you'll need to compile up a local copy of the debug/threaded
RTS so you can use -debug and -threaded together.  The way to do that is
to add GhcRtsWays += thr_debug to mk/build.mk in a GHC tree, and build
as normal.  You can then either use this GHC to build your app, or
just use the thr_debug version of the RTS with your existing GHC
installation by giving the appropriate -L option when linking (it better
be the same GHC version, of course).

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Loading package ghc within GHCi

2005-01-06 Thread Simon Marlow
On 05 January 2005 22:20, Wolfgang Thaller wrote:

 The hook idea works with static linking: the RTS provides a default
 version of the hook, that can be overriden by a user-supplied
 function of the same name.  This is what GHC does.  However, our
 dynamic linker doesn't support this kind of overriding.  The
 system's dynamic linker does, though: that's why you can still
 provide your own malloc() and functions in libc.so will use yours
 rather than the default one. 
 
 Note that the Darwin and (AFAIK) Windows dynamic linker do not support
 this behaviour. They explicitly avoid that kind of behaviour to
 prevent accidental overriding. (What happens on Linux if you link a
 GHC-compiled program to a shared library that internally uses a
 function named allocate? What if the next revision of that library
 introduces such a function?)
 
 What are the alternatives to using these hook functions? Explicitly
 looking for a OutOfHeapHook symbol in the executable using dlsym and
 friends? Exporting a RegisterOutOfHeapCallback function from the
 rts? Both seem a bit inconvenient to me, but some change might be
 necessary when we use dylibs/dlls.

I'm not a big fan of the overriding semantics either.  The problem with
RegisterOutOfHeapCallback() is that you have to actually provide some
code that gets run to register the callback, which means you have to
provide your own main() (and therefore override the RTS-supplied main()
:-).

Can we use weak symbols for this?  Are weak symbols widely-supported
enough?

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Loading package GHC in GHCi

2005-01-10 Thread Simon Marlow
On 09 January 2005 06:50, Sean Seefried wrote:

 I have managed to build package GHC *and* load it into GHCi. 
 Initially this did not work. When I loaded up ghci with the -package
 ghc flag I was assaulted with the following error message.
 
   GHCi runtime linker: fatal error: I found a duplicate definition for
 symbol
   _OutOfHeapHook
   whilst processing object file
   /Users/sseefried/cvs-ghc/PLUGGABLE/working/ghc/compiler/HSghc.o
   This could be caused by:
   * Loading two different object files which export the same symbol
   * Specifying the same object file twice on the GHCi command line
   * An incorrect `package.conf' entry, causing some object to be
   loaded twice.
   GHCi cannot safely continue in this situation. Exiting now. Sorry.
 
 I tracked down this symbol in the GHC source and found it in
 ghc/compiler/parser/hschooks.c.  The purpose of redefining the
 functions within this file is apparently to improve the quality of the
 error messages. That is, the symbols generated are meant to override
 those in the RTS. Unfortunately GHCi doesn't like this at all. At the
 moment it prohibits loading a symbol that is already in the RTS, which
 seems very reasonable from a certain perspective - I can see that a
 duplicate symbol would usually be an error. Except that in the case we
 really *do* want to load it so that it overrides the old one.
 
 The only solution I can come up with is a modification to the
 package.conf syntax so that one can specify symbols which are part of
 the RTS package and the package in question.  We could then modify the
 dynamic linker of GHC so that these symbols were removed from the
 RTS's symbol table.  The symbols would then be loaded back in again
 with the package, thus overriding the old symbols.
 
 What do you think of this proposal? I haven't implemented it yet
 because the alternative - that of simply removing the conflicting
 symbols from package ghc might be what you want.  My temporary fix for
 getting package ghc to load into GHCi is just this - I strip the
 symbols that overlap between the RTS and package ghc.  However, this
 means that the code that was supposed to be overridden in the RTS is
 no longer being overridden.  Although this works, this doesn't seem to
 be what we want.

I'm not keen on adding new syntax to package.conf for this.  We
shouldn't rely on the overriding behaviour - it's non-portable anyway.

I think the best solution is for these hooks to be replaced by variables
containing function pointers, in the same way as the hooks in
RtsMessages.c.  The user program has to override main() in order to set
them, but this is done portably by using the -no-hs-main option to GHC,
and a standard main() is pretty small.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Unicode in GHC: need some advice on building

2005-01-11 Thread Simon Marlow
On 11 January 2005 02:29, Dimitry Golubovsky wrote:

 Bad thing is, LD_PRELOAD does not work on all systems. So I tried to
 put the code directly into the runtime (where I believe it should be;
 the Unicode properties table is packed, and won't eat much space). I
 renamed foreign function names in GHC.Unicode (to avoid conflict with
 libc functions) adding u_ to them (so now they are u_iswupper, etc).
 I placed the new file into ghc/rts, and the include file into
 ghc/includes. I could not avoid messages about missing prototypes for
 u_... functions , but finally I was able to build ghc. Now when I
 compiled my test program with the rebuilt ghc, it worked without the
 LD_PRELOADed library. However, GHCi could not start complaining that
 it could not see these u_... symbols. I noticed some other entry
 points into the runtime like revertCAFs, or getAllocations, declared
 in the Haskell part of GHCi just as other foreign calls, so I just
 followed the same style - partly unsuccessfully.
 
 Where am I wrong?

You're doing fine - but a better place for the tables is as part of the
base package, rather than the RTS.  We already have some C files in the
base package: see libraries/base/cbits, for example.  I suggest just
putting your code in there.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: 6.4 News

2005-01-12 Thread Simon Marlow
On 12 January 2005 12:49, Christian Maeder wrote:

 in a new version of ghc I've noticed that a colon in a path (as
 argument to -i) is no longer recognized. Will this also be the case
 in the new version ghc-6.4?

Thanks!  In fact I broke that by accident recently.  Now fixed.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: More HaXml trouble

2005-01-14 Thread Simon Marlow
On 14 January 2005 13:35, Peter Simons wrote:

 after rebuilding ghc-current, I got an intact Cabal version
 and managed to install HaXml successfully. However, when I
 try to link a program that actually uses the package, the
 linker stage fails with these errors:
  

/usr/local/ghc-current/lib/HaXml-1.12.1/libHSHaXml-1.12.1.a(Combinators.o)(.text+0x3aa9):
 In function `__stginit_TextziXMLziHaXmlziCombinators_':
 : undefined reference to `__stginit_Maybe_'

/usr/local/ghc-current/lib/HaXml-1.12.1/libHSHaXml-1.12.1.a(Escape.o)(.text+0x22b9):
 In function `__stginit_TextziXMLziHaXmlziEscape_':
 : undefined reference to `__stginit_Char_'

/usr/local/ghc-current/lib/HaXml-1.12.1/libHSHaXml-1.12.1.a(Generate.o)(.text+0x37f1):
 In function `__stginit_TextziXMLziHaXmlziHtmlziGenerate_':
 : undefined reference to `__stginit_Char_'
 [...]
 collect2: ld returned 1 exit status
 *** Deleting temp files
 Deleting:  
 
 I've tried adding -package base to the command line, but
 that didn't help. Any other idea?

Looks like the HaXml package spec is missing a dependency on haskell98.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Unicode in GHC: need more advice

2005-01-17 Thread Simon Marlow
On 14 January 2005 12:58, Dimitry Golubovsky wrote:

 Now I need more advice on which flavor of Unicode support to
 implement. In Haskell-cafe, there were 3 flavors summarized: I am
 reposting the table here (its latest version).
 
         | Sebastien's | Marcin's     | Hugs
  -------+-------------+--------------+------------------------
   alnum | L* N*       | L* N*        | L*, M*, N* 1
   alpha | L*          | L*           | L* 1
   cntrl | Cc          | Cc Zl Zp     | Cc
   digit | N*          | Nd           | '0'..'9'
   lower | Ll          | Ll           | Ll 1
   punct | P*          | P*           | P*
   upper | Lu          | Lt Lu        | Lu Lt 1
   blank | Z* \t\n\r   | Z* (except   | ' ' \t\n\r\f\v U+00A0
         |             |   U+00A0     |
         |             |   U+2007     |
         |             |   U+202F)    |
         |             | \t\n\v\f\r U+0085
 
 1: for characters outside Latin1 range. For Latin1 characters
 (0 to 255), there is a lookup table defined as
 unsigned char   charTable[NUM_LAT1_CHARS];
 
 I did not post the contents of the table Hugs uses for the Latin1
 part. However, with that table completely removed, Hugs did not work
 properly. So its contents somehow differs from what Unicode defines
 for that character range. If needed, I may decode that table and post
 its mapping of character categories (keeping in mind that those are
 Haskell-recognized character categories, not Unicode)

I don't know enough to comment on which of the above flavours is best.
However, I'd prefer not to use a separate table for Latin-1 characters
if possible.

We should probably stick to the Report definitions for isDigit and
isSpace, but we could add a separate isUniDigit/isUniSpace for the full
Unicode classes.

 One more question that I had when experimenting with Hugs: if a
 character (like those extra blank chars) is forced into some category
 for the purposes of Haskell language compilation (per the Report),
 does this mean that any other Haskell application should recognize
 Haskell-defined category of that character rather than
 Unicode-defined? 

 For Hugs, there were no choice but say Yes, because both compiler and
 interpreter used the same code to decide on character category. In GHC
 this may be different.

To be specific: the Report requires that the Haskell lexical class of
space characters includes Unicode spaces, but that the implementation of
isSpace only recognises Latin-1 spaces.  That means we need two separate
classes of space characters (or just use the report definition of
isSpace).
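
In code, the two-predicate idea might look like this (isLatin1Space and
isUniSpace are hypothetical names, not an existing API, and the sketch
assumes a generalCategory-style classifier such as Data.Char.generalCategory
is available):

  import Data.Char (generalCategory, GeneralCategory(Space))

  -- The Report's isSpace: Latin-1 white space only.
  isLatin1Space :: Char -> Bool
  isLatin1Space c = c `elem` " \t\n\r\f\v\xa0"

  -- A separate predicate covering the full Unicode space class, for use
  -- by the lexer (and anyone else wanting Unicode semantics).
  isUniSpace :: Char -> Bool
  isUniSpace c = isLatin1Space c || generalCategory c == Space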

GHC's parser doesn't currently use the Data.Char character class
predicates, but at some point we will want to parse Unicode so we'll
need appropriate class predicates then.

 Since Hugs got there first, does it make sense just follow what was
 done here, or will a different decision be adopted for GHC: say, for
 the Parser, extra characters are forced to be blank, but for the rest
 of the programs compiled by GHC, Unicode definitions are adhered to.

Does what I said above help answer this question?

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: How expensive are MVars ?

2005-01-17 Thread Simon Marlow
On 13 January 2005 23:36, Nick Main wrote:

 I'm planning to implement a small OO language on top of GHC (think
 JavaScript) and need to decide on how to implement the mutable object
 graph that is required.
 
 The two approaches I'm considering are:
  - something on top of Data.Graph
  - using MVars as the object references.
 
 The MVar approach is the most appealing, since it would also allow the
 OO language to contain threads.  How expensive is an MVar access (in
 GHC), compared to the graph navigation that would be required to
 resolve a reference using Data.Graph ?
 
 I know this is a fairly nebulous question, but any comments or
 suggestions are appreciated.

So here's a nebulous answer: I don't know, measure it :-)

Each MVar operation involves a function call right now, so you might
class it as expensive.

Personally for your application, I think I'd use a mutable array to
represent the heap.  That amounts to almost the same as using
Data.Graph, but I imagine you'll need the mutability for speed.  Perhaps
providing a mutable Graph data structure implemented using an array
would be a nice abstraction.
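
A sketch of the heap-as-mutable-array idea (ObjId, Obj and the field
layout are invented for illustration):

  import Data.Array.IO

  type ObjId = Int

  -- A toy object: just a list of named references to other objects.
  data Obj = Obj [(String, ObjId)]

  type Heap = IOArray ObjId Obj

  newHeap :: Int -> IO Heap
  newHeap size = newArray (0, size - 1) (Obj [])

  deref :: Heap -> ObjId -> IO Obj
  deref = readArray

  setField :: Heap -> ObjId -> String -> ObjId -> IO ()
  setField heap i name target = do
    Obj fields <- readArray heap i
    writeArray heap i (Obj ((name, target) : filter ((/= name) . fst) fields))

Resolving a reference is then a single readArray rather than a graph
traversal, and threads can share the Heap directly.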

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: [ ghc-Feature Requests-1104381 ] Add wxHaskell link to homepage

2005-01-18 Thread Simon Marlow
On 18 January 2005 14:52, Duncan Coutts wrote:

 While we're thinking about it, could a link to Gtk2Hs be added:
 http://gtk2hs.sourceforge.net/ . Our web page has been updated to be
 rather more current.

Done.

Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: [ ghc-Feature Requests-1104381 ] Add wxHaskell link to homepage

2005-01-18 Thread Simon Marlow
On 18 January 2005 14:42, Ketil Malde wrote:

 I seem to be getting messages from Sourceforge from this mailing
 list.  Is that an intended use for ghc-users?

It's intentional, but it can be easily turned off.  Do people want to
see feature-requests, task-list entries and so forth on this mailing
list, or should they be confined to, say, [EMAIL PROTECTED]

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: debian ghci on sparc64

2005-01-18 Thread Simon Marlow
On 15 December 2004 14:46, William Lee Irwin III wrote:

 There seems to be some trouble with the debian ghci on sparc64. I can
 dredge up more information if given an idea of what to look for.
 
 $ ghci
    ___         ___ _
   / _ \ /\  /\/ __(_)
  / /_\// /_/ / /  | |  GHC Interactive, version 6.2.2, for Haskell 98.
 / /_\\/ __  / /___| |  http://www.haskell.org/ghc/
 \____/\/ /_/\____/|_|  Type :? for help.
 
 Loading package base ... linking ... done.
 zsh: 7796 segmentation fault  ghci

This turned out to be relatively straightforward.  Two fixes:
ghc-asm.lprl needs slight modifications for new versions of gcc on
sparc-*-linux, and the dynamic linker also needs to have USE_MMAP
enabled due to needing to execute dynamically allocated memory.

I've made these fixes in the HEAD, and also merged them into the 6.2
branch.  William: there's a fixed tree in /mnt/dm0/simonmar/ghc-6.2.2 on
your sparc64 box.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: [Haskell] [ANNOUNCE] New version of unicode CWString library withextras

2005-01-19 Thread Simon Marlow
On 19 January 2005 05:31, John Meacham wrote:

 A while ago I wrote a glibc specific implementation of the CWString
 library. I have since made several improvements:
 
 * No longer glibc specific, should compile and work on any system with
   iconv (which is unix standard) (but there are still glibc specific
   optimizations)
 * general iconv library for conversion to any other supported
   character sets
 * LocaleIO, a plug in replacement for many of the standard prelude and
   IO calls which transparently handle locale encoding.
 
 and best of all, it now has a darcs repository.
 
  http://repetae.net/john/recent/out/HsLocale.html
 
 It could still using some fleshing out, LocaleIO is still incomplete,
 I add to it as I need a function,  but I figure I should make it
 available in case the CWString stuff came in handy for implementing
 the FFI spec for ghc.

I'd like to get a correct CString implementation into GHC's libraries.
I think the CWString implementation we have now is good enough, but
CString should be doing locale encoding/decoding (as you know).  At the
same time, we should check all the withCString calls to see whether they
should really be withCAString (since withCString is about to get quite a
bit slower).
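
For example (c_puts here is just a binding to the standard C puts, for
illustration), code that passes data known to be ASCII can switch to
withCAString and skip the locale conversion:

  {-# OPTIONS -fglasgow-exts #-}
  module AsciiOnly where

  import Foreign.C.String (CString, withCAString)
  import Foreign.C.Types (CInt)

  foreign import ccall "puts" c_puts :: CString -> IO CInt

  -- "EHLO" is pure ASCII, so no locale encoding is needed or wanted here.
  sendKeyword :: IO ()
  sendKeyword = do
    _ <- withCAString "EHLO" c_puts
    return ()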

Would you be interested in helping with this, or even putting together a
patch?  It's probably too late for 6.4, though.

 PS. is there a way to replace the top level error handler in ghc?
 (from a haskell library) I'd like to be able to print the error
 messages with the LocaleIO library as it is the only place where the
 wrong encoding still can leak out.

There's no way to replace the handler, I'm afraid.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: debian ghci on sparc64

2005-01-19 Thread Simon Marlow
On 18 January 2005 21:01, William Lee Irwin III wrote:

 On 15 December 2004 14:46, William Lee Irwin III wrote:
 There seems to be some trouble with the debian ghci on sparc64. I
 can dredge up more information if given an idea of what to look for.
 
 On Tue, Jan 18, 2005 at 04:41:02PM -, Simon Marlow wrote:
 This turned out to be relatively straightforward.  Two fixes:
 ghc-asm.lprl needs slight modifications for new versions of gcc on
 sparc-*-linux, and the dynamic linker also needs to have USE_MMAP
 enabled due to needing to execute dynamically allocated memory.
 I've made these fixes in the HEAD, and also merged them into the 6.2
 branch.  William: there's a fixed tree in
 /mnt/dm0/simonmar/ghc-6.2.2 on your sparc64 box.
 
 Great, thanks for fixing it up! I can make similar arrangements for
 the alpha issue, if desired.

Are you referring to this bug?

  http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=232727

If so, that's quite a bit of work - not 6.4 material I'm afraid.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: package installation

2005-01-19 Thread Simon Marlow
On 17 January 2005 22:53, Sean Bowman wrote:

 I'm trying to install HUnit to use with ghci and hugs and having some
 trouble.  It works if I use the -i option with ghci, but I'd rather
 not have to specify that on the command line every time.  Putting it
 in a ~/.ghci file doesn't seem to work.  How can I set the search path
 for 3rd party Haskell modules?

You have two choices: use the -i option, or make a package.  The right
way to use 3rd party libraries is to make a package.  We're making it
easier to build packages with Cabal (http://www.haskell.org/cabal/).

Incidentally, GHC 6.4 will come with HUnit.  And I believe there's a
Cabal version floating around somewhere that should install with GHC
6.2.2.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Dynamic linking under Windows

2005-01-21 Thread Simon Marlow
On 21 January 2005 09:13, Santoemma Enrico wrote:

 I can't tell from the manual whether it is possible to have a short
 .exe file with the rest as a dynamically linked part, or whether that
 isn't supported yet.

Not at the moment.  It was possible in the past, and it may well be
possible again in the near future: Wolfgang Thaller has been doing
lots of work on dynamic linking support, which will be in GHC 6.4.  It
probably won't be completely supported on Windows by 6.4, though.

Cheers,
Simon

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: profiling usage

2005-01-25 Thread Simon Marlow
On 24 January 2005 09:11, Serge D. Mechveliani wrote:

 -
 Mon Jan 24 10:30 2005 Time and Allocation Profiling Report  (Final)
 
  a.out +RTS -M40m -p -RTS
 
 total time  =  9.04 secs   (452 ticks @ 20 ms)
 total alloc = 1,047,303,220 bytes  (excludes profiling overheads)
 
 COST CENTRE        MODULE     %time %alloc
 
 nubCommut          Prelude5    90.0   88.8
 isReducibleAtTop   Reduce       9.1    9.8
 
                                                 individual      inherited
 COST CENTRE              MODULE     no.  entries  %time %alloc  %time %alloc
 
 MAIN                     MAIN         1        0    0.0    0.0  100.0  100.0
 CAF                      Main       242        3    0.0    0.0    0.0    0.0
 ...
 ...
 operatorOccurs           Prelude3   250    19526    0.0    0.2    0.0    0.2
 concat in nextTermLevel  Prelude5   249      183    0.0    0.2   98.7   98.1
 isReducibleAtTop         Reduce     257     4525    8.4    9.0    8.4    9.0
 orderIfCommut            Prelude5   256     7052    0.2    0.1    0.2    0.1
 nubCommut                Prelude5   255     4582   90.0   88.8   90.0   88.8
 vectorsOfDepthOfSorts    Prelude5   251      379    0.0    0.0    0.0    0.0
 vectorsUnderDepthOfSorts Prelude5   254      260    0.0    0.0    0.0    0.0
 isReducibleAtTop         Reduce     248      314    0.6    0.8    0.6    0.8


-
 
 1. A single {-# SCC "isReducibleAtTop" #-} annotation was set.
    Why does the report print two different lines for it?
    What does such duplication mean?

You removed the indentation, which was there to indicate the call
nesting.  A cost centre that appears more than once indicates that it
was called in two places in your program (or called with two different
lexical call stacks, to be precise).

 2. It prints six numeric columns instead of the 5 explained in the
    User Guide 5.00 (which I am looking at now).
    The second column looks new. What does it mean?

The first column is the new one.  It is just an integer identifier for
each cost centre stack.  I can't remember why we added it to the profile
(probably something to do with relating this output to heap profiles).

 3. Consider the line
concat in nextTermLevel  Prelude5  249 183  0.0 0.2  98.7 98.1
 
 This center was set in the following style:
   ...
   Just class_s <- {-# SCC "concat in nextTermLevel" #-}
   concat
   [ofTop f | f <- ops,  coarity f == s .. ]
   where
   ofTop f = ...
   ...
   nubCommut (t:ts) = {-# SCC "nubCommut" #-} ...
 
 Now, the meaning of the line is as follows.
 (a) This centre was entered 249 times.
 (b) What does the 183 mean?

entered 183 times, I think.

 (c) This very point itself takes almost zero time.
 (d) The computation tree below this point (caused by this call)
 takes  98.7%  of the total time.
 Right?

Yes

 The `nubCommut' call is the part of the tree below
 concat in nextTermLevel. And its line shows that it takes 90%
 by itself.
 So, the main expense is by  nubCommut,  and the rest of
 concat in nextTermLevel adds about 9% to it. These 9% include,
 in particular, the cost of the (++) calls.
 Right?

Yes, I think so (but you removed the indentation, so I can't tell for
sure).

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: profiling usage

2005-01-26 Thread Simon Marlow
On 26 January 2005 09:13, Serge D. Mechveliani wrote:

 The indentation in the profiling report or in the source program?

In the profiling report.

 The report prints some function names not from the beginning of the
 line ...
 Maybe you can give a simple example of a program with an SCC where the
 centre is printed twice? I do not understand this point.

eg.

  f x = x * 2
  g x = f x
  main = print (f 3 + g 3)

here f is called both from main and from g, and will appear twice in the
profile.

 Now I replaced SCC with  -auto-all,  and it gives a report which
 looks satisfactory.
 The first part
lpo   TermComp  29.9   47.0
relates   Prelude1  13.11.5
...
 is helpful all right.
 But I also need to know how many times a certain function  f  was
 called, no matter from where it is called and whether it occurs in
 the first part of the report. I search for `f' in the large second
 part of the report, and find two (often many) lines (with different
 indentation):
 
   f   Module1  10   20  ...
   ...
 f   Module1  121  30  ...
 
 So, `f' was entered 20 times from one point and 30 times from another,
 and 50 times in total. Right?

Yes.

Simon

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Process library and signals

2005-01-31 Thread Simon Marlow
On 27 October 2004 15:08, Glynn Clements wrote:

 Simon Marlow wrote:
 
 So basically you're saying that if runProcess is to be used in a
 system()-like way, that is the parent is going to wait synchronously
 for the child, then the parent should be ignoring SIGQUIT/SIGINT. 
 On the other hand, if runProcess is going to be used in a
 popen()-like way, then the parent should not be ignoring
 SIGQUIT/SIGINT. 
 
 Exactly.
 
 The current
 interface doesn't allow for controlling the behaviour in this way.
 
 Yep.
 
 So the current signal handling in runProcess is wrong, and should
 probably be removed.  What should we have instead?  We could
 implement the system()-like signal handling for System.Cmd.system
 only, perhaps. 
 
 Well, probably for system and rawSystem.

I've now fixed system and rawSystem to do something more appropriate on
POSIX systems: they now disable SIGINT and SIGQUIT in the parent, and
reset these signals to SIG_DFL in the child.  This isn't completely
correct, but it's better than before.

runProcess and friends don't do any signal handling.

I think this covers most of the useful situations.  If you want to do
the same thing in both parent and child, or handle in the parent and
SIG_DFL in the child: use runProcess.  If you want to ignore in the
parent and SIG_DFL in the child: use System.Cmd.{system,rawSystem}.  To
handle in the parent and ignore in the child: unfortunately not directly
supported.
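A small usage sketch of that split (rawSystem lived in System.Cmd at the
time; in current GHC the same function is exported from System.Process,
which is what this sketch imports):

  import System.Exit (ExitCode(..))
  import System.Process (rawSystem)

  main :: IO ()
  main = do
    -- while we wait, SIGINT/SIGQUIT are ignored in the parent and reset
    -- to SIG_DFL in the child, per the behaviour described above
    code <- rawSystem "sleep" ["5"]
    case code of
      ExitSuccess   -> putStrLn "child exited normally"
      ExitFailure n -> putStrLn ("child exited with code " ++ show n)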

I realise this doesn't address the library design issues you raised, but
as you pointed out there doesn't seem to be a good platform-independent
solution here.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: ghc HEAD 'make html' problems

2005-02-01 Thread Simon Marlow
On 01 February 2005 10:33, Ben Horsfall wrote:

 On 01 Feb 2005 10:32:14 +0100, Peter Simons [EMAIL PROTECTED] wrote:
 I can't build the library's Haddock documentation anymore:
 the process fails claiming that Control/Arrow-raw.hs would
 be missing. I've had this problem for a while now. Does
 anybody else see this?
 
 Yes, I do too. No solution, but observe that if you do this:
 
 cd libraries/base
 make Control/Arrow-raw.hs
 
 No Arrow-raw.hs file is created, although there is no error either.

This was due to a bug in GHC introduced a few days ago and fixed
yesterday.  Please try again.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: RFC: hyperlinks in Haddock docs

2005-02-01 Thread Simon Marlow
On 01 February 2005 11:31, [EMAIL PROTECTED] wrote:

 On Tue, Feb 01, 2005 at 11:02:45AM -, Simon Marlow wrote:
 I'm making some changes to the way Haddock creates links, and I'd
 like to solicit comments on possible alternatives.
 
 The existing approach is: for each non-locally-defined identifier in
 the current module, we hyperlink to a module that it was imported
 from, and that (a) actually documents the identifier, and (b) isn't
 hidden.
 
 [...]
 
 So the new approach is to try to build up a global table of the
 best destinations to link to for each entity.  The question is how
 to determine best.  Here's my first stab:
 
   - A is better than B if A directly or indirectly imports B
 
 Perhaps it should be the other way round: the lowest non-hidden module
 that exports the name (if more than one such, fix on one).  This would
 need most of GHC.* hidden, which is desirable anyway.

Yes, maybe that's better, but it's not enough on its own.  For example,
GHC.Exts exports Int, but there's no relationship between Prelude and
GHC.Exts in the import hierarchy - so how do we determine which one is
better?  (GHC.Exts shouldn't be hidden, BTW.  The rest of GHC.* probably
should.)

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: RFC: hyperlinks in Haddock docs

2005-02-01 Thread Simon Marlow
On 01 February 2005 12:48, Ketil Malde wrote:

 Simon Marlow [EMAIL PROTECTED] writes:
 
 There are some problems with the existing approach.  It doesn't cope
 well with instances: instances might refer to types/classes not below
 the current module in the hierarchy.  Also you might import an entity
 from a hidden module, but actually want to hyperlink to another
 module 
 
 Thoughts?  Better ideas?
 
 If it turns out to be difficult to determine where to link, one option
 could be to link to a separate table on the same page, listing the
 candidates?

Hmm, I'm not sure that would be practical.  Lots of entities are
exported from more than one place, especially Prelude entities.  You'd
have a table on almost every single page listing Bool, Maybe, Int, etc.

You could link to the index, I suppose - that lists all the places you
can get a particular entity.  I'm not sure you want to make two clicks
to follow a link every time though.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Waiting on Sockets or File Descriptors

2005-02-03 Thread Simon Marlow
On 02 February 2005 19:48, Peter Simons wrote:

 Wolfgang Thaller writes:
 
   a) poll() is not supported on Mac OS X and (at least some
   popular versions of) BSD.
 
 Are you certain? Just tried man poll on one of the MacOS X
 machines the SourceForge compile farm offers, and that one
 had it: Darwin ppc-osx1 5.5 Darwin Kernel Version 5.5.
 
 
   b) 'forkIO' in the threaded RTS would suffice in this
   case, as the poll() or select() system calls don't use
   any thread-local state. In the threaded RTS, safe
   foreign imports never affect other threads [...].
 
 That would be really good news! I assumed that GHC's runtime
 system used one thread for _all_ FFI invocations? (Unless
 you start new ones.) So I thought calling poll() would block
 all other FFI invocations until it returned?
 
 Or is that only for unsafe FFI calls?

When you compile your program with -threaded, safe FFI calls don't
block other threads, but unsafe calls still do.  Basically a safe
FFI call releases the lock on the RTS so other Haskell threads can
continue to run (and that at least partly explains why we have the
distinction: releasing and re-acquiring a lock is expensive).
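A hedged sketch of the distinction in source form (PollFd is a placeholder
type, the signatures are simplified, and c_poll/c_getpid are just
illustrative C entry points, not anything from this thread):

  import Foreign.C.Types
  import Foreign.Ptr (Ptr)

  data PollFd   -- opaque placeholder; real marshalling omitted

  -- 'safe': releases the RTS lock, so a blocking poll() does not stop
  -- other Haskell threads when compiled with -threaded
  foreign import ccall safe "poll"
    c_poll :: Ptr PollFd -> CULong -> CInt -> IO CInt

  -- 'unsafe': cheaper (no lock release/re-acquire), but blocks the RTS
  -- until it returns, so only suitable for quick calls
  foreign import ccall unsafe "getpid"
    c_getpid :: IO CInt

  main :: IO ()
  main = c_getpid >>= print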

 Do you have an URL for me where I can find out more about
 this, by any chance?

There's not much, but the -threaded option is documented here:

http://www.haskell.org/ghc/docs/latest/html/users_guide/options-phases.html#OPTIONS-LINKER

and the Control.Concurrent documentation explains what bound threads
are.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: debugging memory allocations

2005-02-03 Thread Simon Marlow
On 03 February 2005 00:41, Duncan Coutts wrote:

 On Wed, 2005-02-02 at 13:30 -0700, Seth Kurtzberg wrote:
 Duncan Coutts wrote:
 In these cases we cannot turn on traditional profiling since that
 would interfere with the optimisations we are relying on to
 eliminate most of the other memory allocations. 
 
 I don't understand why you can't use profiling as a debugging tool.
 How would profiling, for test purposes, cause other things to
 break?
 
 The problem is that profiling adds extra parameters and extra code
 to each function (each SCC). This can interfere with optimisations
 like inlining and unboxing, I believe. Simon could explain it better.

Yes.  Those pesky SCC annotations get in the way of optimisations.  It's
possible (likely even) that we could do a better job here; profiling is
long overdue for an overhaul.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: debugging memory allocations

2005-02-07 Thread Simon Marlow
On 02 February 2005 18:42, Duncan Coutts wrote:

 On Wed, 2005-02-02 at 17:01 +, Simon Marlow wrote:
 On 02 February 2005 13:38, Duncan Coutts wrote:
 Would looking at the core files help? What would I be looking for?
 
 Here's a simple version that I would expect to run in constant
 space.
 
 pixbufSetGreen :: Pixbuf -> IO ()
 pixbufSetGreen pixbuf = do
   ptr <- pixbufGetPixels pixbuf
   sequence_
     [ do pokeByteOff ptr (y*384+3*x)   (0  ::Word8)
          pokeByteOff ptr (y*384+3*x+1) (128::Word8)
          pokeByteOff ptr (y*384+3*x+2) (96 ::Word8)
     | y <- [0..127], x <- [0..127] ]
 
 
 Yes, let's see the core.  Since you're interested in allocation, you
 might be better off with -ddump-prep rather than -ddump-simpl: the
 former has all the allocation made into explicit 'let' expressions
 ready for code generation.
 
 Ok, attached is the -ddump-prep for the version using pixbufSetGreen,
 and another file for the longer, more complicated one which uses
 setWierdColour. Both versions do contain 'let's.
 
 I've also attached the original code. (which you won't be able to
 build without hacking the gtk bits out of it)

I took a quick look at this, and one thing I noticed is that some
deforestation isn't happening.  There is still an explicit [0..127]
being constructed/deconstructed.

I don't think we'll be able to investigate this right now, so if you
need performance immediately I suggest you rewrite the code using
explicit recursion.
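For what it's worth, here is a hedged sketch of that rewrite (Pixbuf and
pixbufGetPixels come from the poster's gtk binding, so the loop is written
against a raw Ptr to keep the sketch self-contained; only the
explicit-recursion shape is the point):

  import Data.Word (Word8)
  import Foreign.Marshal.Alloc (allocaBytes)
  import Foreign.Ptr (Ptr)
  import Foreign.Storable (pokeByteOff)

  setGreen :: Ptr Word8 -> IO ()
  setGreen ptr = goY 0
    where
      -- outer loop over rows, inner loop over columns, with no
      -- intermediate [0..127] lists to construct and deconstruct
      goY y | y > 127   = return ()
            | otherwise = goX y 0 >> goY (y + 1)
      goX y x
        | x > 127   = return ()
        | otherwise = do
            pokeByteOff ptr (y*384 + 3*x)     (0   :: Word8)
            pokeByteOff ptr (y*384 + 3*x + 1) (128 :: Word8)
            pokeByteOff ptr (y*384 + 3*x + 2) (96  :: Word8)
            goX y (x + 1)

  main :: IO ()
  main = allocaBytes (128 * 384) setGreen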

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: readline fun

2005-02-07 Thread Simon Marlow
On 02 February 2005 15:51, Ian Lynagh wrote:

 The Debian ghc6 package curently has both a build-dependency and a
 normal dependency on libreadline4-dev. The former is so the readline
 library (and ghci) can be built, and the latter so compiling programs
 with the readline package behaves correctly.
 
 Now I want to move over to libreadline5-dev instead. Clearly, to build
 the new ghc6, I need the old ghc6 installed, and thus need
 libreadline4-dev installed (as it's a dependency). However,
 libreadline4-dev and libreadline5-dev can't be installed
 simultaneously. 
 
 I believe this isn't a /real/ problem because the readline package of
 the old GHC isn't actually needed to compile a new GHC, so
 libreadline4-dev isn't actually needed. Thus I can solve this problem
 by doing a build by hand on each arch. However, it would make my life
 a lot easier if things weren't so entangled in the first place.

I bet the old GHC will work fine with the new readline.  Can't you solve it 
that way?  (apologies if I'm just being naïve... I've never used a Debian 
system).

 While I could just split the readline package off into a separate
 ghc6-readline package for Debian, I fear this may cause confusion for
 users, and it would mean satisfying cabal package deps was not
 necessarily sufficient for Debian systems. So what would be really
 useful for me is if the split were done by ghc itself, in much the
 same way as how the opengl libraries can be split off. Then, in
 particular, cabal packages using readline would have to explicitly
 state it rather than assuming it'll be there by default.

Not sure what you're asking for here.  The OpenGL libraries can't easily be 
split off.  You want us to ship the readline package separately, say as a Cabal 
package?  That's a possibility, but we like to keep the GHC sources as 
self-contained as possible, and adding another dependency just makes it harder 
to build GHC.  Sure, readline would be optional, but mostly you want it (why 
doesn't up-arrow work in the GHCi I just built?).

Ideally, we would have a home-grown readline replacement 
(System.Console.SimpleLineEditor doesn't count: it depends on your terminal 
using ANSI escape sequences).

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: ghc-cvs-snapshot with wxHaskell

2005-02-08 Thread Simon Marlow
On 07 February 2005 19:28, Patrick Scheibe wrote:

 It seems that there have been changes in the OpenGL library during the
 last month, so I decided to get a really recent CVS version of GHC
 (ghc-6.5.20050206-src.tar.bz2). The compilation works fine.
 
 My problem is that I also need the wxHaskell library. Its
 compilation fails with the message:
 
 <wxoutput>
 ghc -c wxdirect/src/Map.hs -o out/wxdirect/Map.o -ohi
 out/wxdirect/Map.hi -odir out/wxdirect/  -package parsec
 -iout/wxdirect 
 sed: kann out/wxdirect/Map.d.in nicht lesen: Datei oder Verzeichnis
 nicht gefunden
 make: *** [out/wxdirect/Map.o] Fehler 2
 </wxoutput>
 
 The third line is German and means sed: cannot read
 out/wxdirect/Map.d.in: file or directory not found.
 This, or an almost identical error, appears when I use the CVS version
 of wxHaskell.
 
 I'm also in contact with Daan Leijen, the main developer of wxHaskell.
 He said:
 
 daan
 Darn, it seems that ghc changed its  options or something. My makefile
 generates dependency files (.d files) using ghc. Since sed can't
 find the file, it is either not generated or it is put in the wrong
 directory. Can you look around in your file system if there is a Map.d
 or Map.d.in file somewhere around (maybe Map.hs.d.in ?) I guess it
 is still in the source directory instead of the out/... directory.

Could you tell us what command line was supposed to generate the
Map.d.in or Map.d file?  It may be a bug, there were a few changes in
the driver recently.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: GHC as a package

2005-02-09 Thread Simon Marlow
On 08 February 2005 12:43, Lennart Kolmodin wrote:

 I'm working on an IDE for Haskell, written in Haskell.
 Currently, I'm looking for a way to parse .hs-files for a module
 browser and I recall that Simon Marlow was going to release GHC as a
 package soon.
 We could also use that package to compile source code without
 invoking ghc as a separate process.
 
 What is the status of the package and where can I get it?
 I can't find it in the CVS or in any of the snapshots.

You can currently compile GHC as a package, but the part that is missing
is a well-thought-out API to access the facilities of GHC.  We'd like to
do this, and indeed it will probably emerge as part of the work we're
doing on a Visual Studio plug-in, but currently other things have higher
priority.  We plan to get back to work on Visual Studio during March.

To compile GHC as a package, get a recent GHC source tree and set
'BuildPackageGHC=YES' in your mk/build.mk.  You should also set
$(GHC_PKG) to point to your ghc-pkg command.  Then build ghc as normal,
and in ghc/compiler say 'make install-inplace-pkg' to register the
package (this won't do any actual installation, just register the
package with your installed GHC).

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Download stats

2005-02-09 Thread Simon Marlow
Hi folks,

For the forthcoming 6.4 release, we'd like to get a rough idea of
download statistics, at least from haskell.org.  Both Simon & I are too
busy/lazy (delete as applicable) to do this ourselves, and we don't know
the best tools to use (grep|wc on the access_log is a bit too crude - we
want to exclude things like partial downloads except when the download
was completed later, etc.).  Of course, if we can get or estimate
download statistics for other sources of GHC too, that would be great.

Any volunteers?

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Download stats

2005-02-10 Thread Simon Marlow
On 09 February 2005 20:03, Tomasz Zielonka wrote:

 On Wed, Feb 09, 2005 at 01:11:34PM -, Simon Marlow wrote:
 Hi folks,
 
 For the forthcoming 6.4 release, we'd like to get a rough idea of
 download statistics, at least from haskell.org.  Both Simon & I are
 too busy/lazy (delete as applicable) to do this ourselves, and we
 don't know the best tools to use (grep|wc on the access_log is a bit
 too crude - we want to exclude things like partial downloads except
 when the download was completed later, etc.).  Of course, if we can
 get or estimate download statistics for other sources of GHC too,
 that would be great. 
 
 Any volunteers?
 
 Why not ask GHC users to fill in a web form with additional
 information, like how they use GHC, where they use it, for what, etc?
 
 I can make a suitable web form in WASH, if you want.

That would be nice.  There's no way I'm putting a web form in the way of
the download link, but having it as a separate survey on the site would
perhaps be useful.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


GHC 6.4 release candidates available

2005-02-10 Thread Simon Marlow
We are finally at the release candidate stage for GHC 6.4.  Snapshots
with versions 6.4.20050209 and later should be considered release
candidates for 6.4.

Source and Linux binary distributions are available here:

  http://www.haskell.org/ghc/dist/stable/dist/

Please test if you're able to, and give us feedback.

Thanks!

Simons & the GHC team
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: ghc sensitive to struct layout

2005-02-10 Thread Simon Marlow
On 09 February 2005 08:53, Axel Simon wrote:

 gcc uses a different convention from the Microsoft environment when it
 comes to laying out fields in C structs, in particular bit fields.
 Can I pass the -optc--mms-bitfields to ghc when it compiles via C
 without negative effect?
 
 This flag is not implicit at the moment which I assume means that ghc
 itself is not compiled with --mms-bitfields on Windows. ghc surely
 includes struct declarations when it compiles the generated C files,
 so the question is whether all its structs are laid out the same
 regardless of which layout option one chooses.

The answer is I don't know, but we could probably make it true if
necessary.   

In GHC 6.4, the generated HC code doesn't use any structs; all the field
offsets are precomputed, which will no doubt make your life easier.

One thing you could do (in 6.4) is to compile
ghc/include/mkDerivedConstants.c with and without -mms-bitfields and see
if it generates the same output.  If it does, then we have some
confidence that -mms-bitfields won't cause any grief.  If it doesn't,
then we have some clue about what needs fixing.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: GHC 6.4 release candidates available

2005-02-10 Thread Simon Marlow
On 10 February 2005 13:31, Malcolm Wallace wrote:

 Simon Marlow [EMAIL PROTECTED] writes:
 
 We are finally at the release candidate stage for GHC 6.4.  Snapshots
 with versions 6.4.20050209 and later should be considered release
 candidates for 6.4.
 
 Using: ghc-6.4.20050209-i386-unknown-linux.tar.bz2
 
 $ cat hello.hs
 main = putStrLn hello world
 $ ghc-6.4.20050209 -o hello hello.hs
 ld: cannot find -lHSbase_cbits
 collect2: ld returned 1 exit status
 $
 
 Pretty much a show-stopper.

Yes, I'm fixing this right now.  Please hold off downloading until I can
get a fixed distribution up...

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: GHC 6.4 release candidates available

2005-02-10 Thread Simon Marlow
On 10 February 2005 13:40, Simon Marlow wrote:

 On 10 February 2005 13:31, Malcolm Wallace wrote:
 
 Simon Marlow [EMAIL PROTECTED] writes:
 
 We are finally at the release candidate stage for GHC 6.4. 
 Snapshots with versions 6.4.20050209 and later should be considered
 release candidates for 6.4.
 
 Using: ghc-6.4.20050209-i386-unknown-linux.tar.bz2
 
 $ cat hello.hs
 main = putStrLn hello world
 $ ghc-6.4.20050209 -o hello hello.hs
 ld: cannot find -lHSbase_cbits
 collect2: ld returned 1 exit status
 $
 
 Pretty much a show-stopper.
 
 Yes, I'm fixing this right now.  Please hold off downloading until I
 can get a fixed distribution up...

New distributions are up now.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: Unregistering a package

2005-02-10 Thread Simon Marlow
On 09 February 2005 13:32, Peter Simons wrote:

 I have an interesting problem. There are two versions of the
 HsDNS package installed right now:
 
  $ ghc-pkg list
  | /usr/local/ghc-current/lib/ghc-6.5/package.conf:
  | rts-1.0, [...]  (hsdns-2005-02-04),
  | hsdns-2005-02-08
 
 Now how can I unregister them? I have tried everything I
 could think of, but no luck:
 
  $ ghc-pkg unregister hsdns
  | ghc-pkg: package hsdns matches multiple packages:
  |   hsdns-2005-02-04, hsdns-2005-02-08
 
  $ ghc-pkg unregister hsdns-2005-02-08
  | ghc-pkg: cannot parse 'hsdns-2005-02-08' as a package identifier
 
 Can someone give me a pointer how to remedy this situation?

The problem is not really that hsdns-2005-02-08 isn't a legal package
identifier; rather, it is an ambiguous package identifier.  I suggest
you remove that package by hand from the package.conf file, and instead
use hsdns-2005.02.08.

The general syntax of package ids is:

   pkgid ::= pkg ('-' version)?
   pkg ::= (alphanum|'-')+
   version ::= (digit+) ('.' digit+)* ('-' alphanum+)*

This syntax means that package ids which have numeric component(s) at
the end and no '.' will be ambiguous.
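For example, under this grammar (the second form being the one reported
above):

   hsdns-2005.02.08    unambiguous: package name hsdns, version 2005.02.08
   hsdns-2005-02-08    ambiguous: because '-' may occur in a package name,
                       the trailing -2005-02-08 could be parsed as part of
                       the name or partly as a version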

I've added a test to ghc-pkg to prevent you from registering a package
with a problematic package id for now.  Perhaps we should change the
syntax of package ids though?

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: GHC 6.4 release candidates available

2005-02-10 Thread Simon Marlow
On 10 February 2005 15:13, Malcolm Wallace wrote:

 Simon Marlow [EMAIL PROTECTED] writes:
 
 We are finally at the release candidate stage for GHC 6.4.
 Please test if you're able to, and give us feedback.
 
 In versions 5.00 <= ghc <= 6.2.2, the result of
 
 ghc -v 2>&1 | head -2
 
 was something like
 
 Glasgow Haskell Compiler, Version 6.2.2, 
 Using package config file: /grp/haskell/lib/ghc-6.2.2/package.conf
 
 whereas with 6.4, these two lines have been swapped:
 
 Reading package config file: /usr/malcolm/local/lib/ghc-6.4.20050209/package.conf
 Glasgow Haskell Compiler, Version 6.4.20050209,
 
 and the "Using package config" message has become "Reading package
 config".  These changes are minor and unnecessary: in particular they
 make the detection of configuration information (by hmake) rather
 more complicated than it ought to be.  I know this is a pretty trivial
 complaint, but the -v behaviour has been stable for a few years now,
 so why change it arbitrarily?

Ok, fixed.  The right way to get the location of the package.conf file
is to ask ghc-pkg, BTW.  In fact, the right way is not to know the
location of package.conf at all, but to use ghc-pkg to query its
contents.  The contents of package.conf is proprietary :-)

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: GHC 6.4 release candidates available

2005-02-10 Thread Simon Marlow
On 10 February 2005 15:36, Malcolm Wallace wrote:

 Simon Marlow [EMAIL PROTECTED] writes:
 
 Ok, fixed.  The right way to get the location of the package.conf
 file is to ask ghc-pkg, BTW.  In fact, the right way is not to know
 the location of package.conf at all, but to use ghc-pkg to query its
 contents.  The contents of package.conf is proprietary :-)
 
 And indeed hmake does use ghc-pkg, when it needs to find import
 directories etc.  But the thing is, previous versions of ghc-pkg
 reported such directories as e.g. $libdir/base, and how do you find
 out what $libdir refers to...?

ghc --print-libdir

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


RE: GHC 6.4 release candidates available

2005-02-11 Thread Simon Marlow
On 10 February 2005 17:07, Malcolm Wallace wrote:

 $ ghc-pkg-6.4.20050209 --show-package=base --field=import_dirs
 [/usr/malcolm/local/lib/ghc-6.4.20050209/imports]
 
 yet
 
 $ ghc-pkg-6.4.20050209 --show-package=base-1.0 --field=import_dirs
 ghc-pkg: cannot find package base-1.0
 
 $ ghc-pkg-6.4.20050209 --list-packages
 /usr/malcolm/local/lib/ghc-6.4.20050209/package.conf:
 rts-1.0, base-1.0, haskell98-1.0, template-haskell-1.0, ...

Fixed, thanks.

BTW, we recommend you migrate to using the new command-line syntax for
ghc-pkg at some point.

Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

