Re: Maps, was Re: GHC source code improvement ideas

2008-01-09 Thread Adrian Hey

Wolfgang Jeltsch wrote:

On Sunday, 6 January 2008 at 13:37, Adrian Hey wrote:

It's the GT class here..


Short remark: Wouldn’t a longer, more descriptive identifier be better?


Like "GeeTee" maybe? Or even "GeneralisedTrie"?

I like short names myself. But as I have stopped work on this particular
lib, it needs a new owner. Anyone who takes it on has renaming
privileges :-)

Regards
--
Adrian Hey

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Maps, was Re: GHC source code improvement ideas

2008-01-06 Thread Adrian Hey

Christian Maeder wrote:

Simon Marlow wrote:

Regarding Data.Map, I'd be interested in trying out AVL trees instead,
to see if they have better performance, but then we'd have to import the
code of course.


Surely, we want the best maps possible for ghc and as a public library
(and minimize maintenance).

The problem is to agree on a suitable interface. I would suggest taking
(or only slightly changing) Daan's interface (the current Data.Map) and
improving the underlying implementation, possibly using (e.g.) Adrian
Hey's AVL trees.


The trouble is that, as far as implementation is concerned, the best maps
possible are a continually moving target I suspect, not to mention
being highly dependent on key type. I certainly wouldn't say AVL tree
based Maps are the best possible, though they do seem to give better
performance (particularly for union, intersection etc.). The clone
also addresses some defects in the current Data.Map (like lack of
strictness control) and has some other useful stuff.

But if this is going to be used at all, I would say it is only
a stop-gap solution, which leads me to your second point about
interfaces. The generalised trie lib I was working on was a serious
attempt to see what a real, usable, non-toy map API should look
like. It's the GT class here..
http://code.haskell.org/collections/collections-ghc6.8/Data-Trie-General/Data/Trie/General/Types.hs
(Sorry for the long URL).
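To give a flavour of the idea, here is a much-simplified sketch of mine (the real GT class at the URL above is far larger): a generalised trie presents one map API whose representation is driven by the key type, so list keys get a nested-trie instance built from the instance for the element type. All names here are illustrative, not the actual GT API.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances #-}

-- A cut-down, illustrative generalised-trie class: the key type
-- determines the map representation (and vice versa).
class GT map k | map -> k, k -> map where
  emptyGT  :: map a
  insertGT :: k -> a -> map a -> map a
  lookupGT :: k -> map a -> Maybe a

-- A toy base instance for Int keys (a real library would use
-- something like a Patricia tree here).
newtype IntGT a = IntGT [(Int, a)]

instance GT IntGT Int where
  emptyGT = IntGT []
  insertGT k a (IntGT xs) = IntGT ((k, a) : filter ((/= k) . fst) xs)
  lookupGT k (IntGT xs) = lookup k xs

-- Tries keyed by lists: a value for the empty key, plus a map from
-- first elements to tries for the tails.
data ListGT map k a = ListGT (Maybe a) (map (ListGT map k a))

instance GT map k => GT (ListGT map k) [k] where
  emptyGT = ListGT Nothing emptyGT
  insertGT []     a (ListGT _ m) = ListGT (Just a) m
  insertGT (k:ks) a (ListGT v m) =
    let sub = maybe emptyGT id (lookupGT k m)  -- existing sub-trie, or empty
    in  ListGT v (insertGT k (insertGT ks a sub) m)
  lookupGT []     (ListGT v _) = v
  lookupGT (k:ks) (ListGT _ m) = lookupGT k m >>= lookupGT ks
```

The same `insertGT`/`lookupGT` vocabulary then works uniformly for `Int` keys and `[Int]` keys, which is the "uniform API" point made below.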

It's already somewhat huge and still incomplete IMO, but even in its
current form it gives a uniform API for Ints, arbitrary Ord instances
and Lists. It's a shame it's all completely untested :-(

What really needs to happen on this is..
 1 - "Finish" and stabilise the GT class definition. There's still more
 that's needed but I think the promised type families magic is needed
 first.
 2 - Write a comprehensive test/benchmarking suite for GT instances.
 3 - Provide some way to automatically generate the instances for
 arbitrary user defined types.

Which is all rather a lot of work that nobody seems very interested
in :-(

Regards
--
Adrian Hey



Is GHC.Base deliberately hidden?

2008-01-03 Thread Adrian Hey

Hello,

Why is it that the haddock docs supplied with GHC omit this module and
its exports? Is it because we're not supposed to use them? I'm thinking
of the compareInt# function in particular, which I use quite a lot.

Thanks
--
Adrian Hey




Re: Orphan Instances with GHC 6.8

2007-10-17 Thread Adrian Hey

Hello Simon

Simon Peyton-Jones wrote:

What you say rings no bells.  I don't see how to make progress without more 
info.
Perhaps show the module that elicits the unexpected orphan message?


The problem I'm having is with the collections package..
 http://darcs.haskell.org/packages/collections-ghc6.6/

but, it's complicated by the fact that..

1- The code I'm working on is in my local repository, which (amongst
   other things) has some changes in it to get it to compile
   with 6.8 (experimentally), so it is not quite the same.
2- I can't get exactly the same code to compile with the different
   ghc versions.
3- Every experiment I try takes a long time because the collections
   package takes a long time to build.
4- I'm a bit busy today :-)

So I'm afraid I can't provide a small demo of the problem because
I'm having a hard time reproducing it consistently. But I'll
try to produce some kind of reproducible (but not necessarily
small :-) test case soon. Or else just report this as a false alarm
caused by me doing something silly :-)

So maybe for the time being you should forget I mentioned this as
I don't want to waste anybody's time.

Thanks
--
Adrian Hey


Simon

| -Original Message-
| From: Adrian Hey [mailto:[EMAIL PROTECTED]
| Sent: 17 October 2007 17:04
| To: glasgow-haskell-users@haskell.org; Simon Peyton-Jones
| Subject: Re: Orphan Instances with GHC 6.8
|
| Hello again,
|
| Adrian Hey wrote:
| > Hello Folks,
| >
| > One thing different between 6.6 and 6.8 is that with -Wall
| > I get a lot more warnings about orphan instances, even if the
| > code has not changed.
| >
| > Has the definition of what an Orphan instance is changed?
| >
| > Or is this a bug? If so, is the bug in 6.6 or 6.8?
| >
| > I can make them go away with the -fno-warn-orphans flag,
| > but I'm still curious as to why I didn't have to do this before.
|
| Sorry, this situation seems more complex than that. I can't quite
| figure out what's going on; in fact it may be related
| to some other change I've made in the code to get it to compile with 6.8.
|
| The strange thing (to me) is that if I take exactly the same module
| and use it in 2 different packages (which are compiled with the same
| ghc options AFAIK), one gives me the orphan warning and one doesn't.
| Does that make sense? (i.e. is "Orphanness" somehow context dependent?)
|
| Thanks
| --
| Adrian Hey







Re: Orphan Instances with GHC 6.8

2007-10-17 Thread Adrian Hey

Hello again,

Adrian Hey wrote:

Hello Folks,

One thing different between 6.6 and 6.8 is that with -Wall
I get a lot more warnings about orphan instances, even if the
code has not changed.

Has the definition of what an Orphan instance is changed?

Or is this a bug? If so, is the bug in 6.6 or 6.8?

I can make them go away with the -fno-warn-orphans flag,
but I'm still curious as to why I didn't have to do this before.


Sorry, this situation seems more complex than that. I can't quite
figure out what's going on; in fact it may be related
to some other change I've made in the code to get it to compile with 6.8.

The strange thing (to me) is that if I take exactly the same module
and use it in 2 different packages (which are compiled with the same
ghc options AFAIK), one gives me the orphan warning and one doesn't.
Does that make sense? (i.e. is "Orphanness" somehow context dependent?)

Thanks
--
Adrian Hey



Orphan Instances with GHC 6.8

2007-10-17 Thread Adrian Hey

Hello Folks,

One thing different between 6.6 and 6.8 is that with -Wall
I get a lot more warnings about orphan instances, even if the
code has not changed.

Has the definition of what an Orphan instance is changed?

Or is this a bug? If so, is the bug in 6.6 or 6.8?

I can make them go away with the -fno-warn-orphans flag,
but I'm still curious as to why I didn't have to do this before.
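For readers following the thread, an orphan instance is an instance declared in a module that defines neither the class nor the type. A minimal illustration (my own example, unrelated to the code being discussed):

```haskell
{-# OPTIONS_GHC -Wall #-}

-- Orphan: both the class (Semigroup) and the type (Bool) are defined
-- elsewhere (in base), so -Wall reports this instance as an orphan.
instance Semigroup Bool where
  (<>) = (&&)

-- Not an orphan: the type is defined in this very module, so GHC can
-- always find the instance by looking where the type lives.
newtype Conj = Conj Bool

instance Semigroup Conj where
  Conj a <> Conj b = Conj (a && b)
```

Orphans are flagged because GHC must eagerly read the interface of every module that might contain one, and because two libraries can define conflicting orphans for the same class/type pair.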

Thanks
--
Adrian Hey


Re: ANNOUNCE: GHC 6.8.1 Release Candidate

2007-09-28 Thread Adrian Hey

Ian Lynagh wrote:

Hi Adrian,

On Fri, Sep 14, 2007 at 07:50:47AM +0100, Adrian Hey wrote:

[29 of 53] Compiling Data.Tree.AVL.Join (
Data.Tree.AVL/Data/Tree/AVL/Join.hs, dist\build/Data/Tree/AVL/Join.o )
ghc.exe: panic! (the 'impossible' happened)
  (GHC version 6.8.20070912 for i386-unknown-mingw32):
cgPanic
a{v sMX} [lid]
static binds for:
collections-0.3:Data.Tree.AVL.Join.poly_go{v rse} [gid]
collections-0.3:Data.Tree.AVL.Join.poly_$wlgo{v rsf} [gid]
collections-0.3:Data.Tree.AVL.Join.flatConcat{v ryL} [gid]
local binds for:
SRT label collections-0.3:Data.Tree.AVL.Join.$LrMelvl{v rMe}_srt

Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug


I can't reproduce this with the i386/Linux 6.8.0.20070925 and
./Setup configure; ./Setup build:


Sorry, it's already been fixed by SPJ..

 http://hackage.haskell.org/trac/ghc/ticket/1718

Regards
--
Adrian Hey


Re: ANNOUNCE: GHC 6.8.1 Release Candidate

2007-09-19 Thread Adrian Hey

Adrian Hey wrote:

I get this error..

[29 of 53] Compiling Data.Tree.AVL.Join (
Data.Tree.AVL/Data/Tree/AVL/Join.hs, dist\build/Data/Tree/AVL/Join.o )
ghc.exe: panic! (the 'impossible' happened)
  (GHC version 6.8.20070912 for i386-unknown-mingw32):
cgPanic
a{v sMX} [lid]
static binds for:
collections-0.3:Data.Tree.AVL.Join.poly_go{v rse} [gid]
collections-0.3:Data.Tree.AVL.Join.poly_$wlgo{v rsf} [gid]
collections-0.3:Data.Tree.AVL.Join.flatConcat{v ryL} [gid]
local binds for:
SRT label collections-0.3:Data.Tree.AVL.Join.$LrMelvl{v rMe}_srt

Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug


I still get this error with the 6.8.0.20070917 release, and the haddock
problem with windows still seems to be there. Also, there doesn't
seem to be a runHaskell command included (not sure if that's
intentional).

BTW, should I be reporting bugs with the release candidate on this
list or via the usual trac?

Regards
--
Adrian Hey




Re: ANNOUNCE: GHC 6.8.1 Release Candidate

2007-09-14 Thread Adrian Hey

Adrian Hey wrote:

OK. Meanwhile I've tried compiling setup.hs and building the
6.6 version of the collections package..


Also, there still seems to be a problem with building Haddock
docs in windows. Although Haddock now seems to find the
relevant base package haddock, it still doesn't link to it
properly because the links are all missing the "file://"
prefix. But I'm not sure if this is a haddock-0.8 problem,
a Cabal problem or a ghc distro problem.

Regards
--
Adrian Hey



Re: ANNOUNCE: GHC 6.8.1 Release Candidate

2007-09-13 Thread Adrian Hey

Ian Lynagh wrote:

This is a symptom of the stage2 build failure, meaning the installer is
using the stage 1 compiler which can't do bytecode compilation.

Therefore this should also be fixed in the next snapshot.


OK. Meanwhile I've tried compiling setup.hs and building the
6.6 version of the collections package..

 http://darcs.haskell.org/packages/collections-ghc6.6/

I get this error..

[29 of 53] Compiling Data.Tree.AVL.Join (
Data.Tree.AVL/Data/Tree/AVL/Join.hs, dist\build/Data/Tree/AVL/Join.o )
ghc.exe: panic! (the 'impossible' happened)
  (GHC version 6.8.20070912 for i386-unknown-mingw32):
cgPanic
a{v sMX} [lid]
static binds for:
collections-0.3:Data.Tree.AVL.Join.poly_go{v rse} [gid]
collections-0.3:Data.Tree.AVL.Join.poly_$wlgo{v rsf} [gid]
collections-0.3:Data.Tree.AVL.Join.flatConcat{v ryL} [gid]
local binds for:
SRT label collections-0.3:Data.Tree.AVL.Join.$LrMelvl{v rMe}_srt

Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug

BTW, if you want to reproduce this the cabal file needs this line..
build-depends:  base >= 2.0, QuickCheck, bytestring >= 0.9, containers >= 0.1, array >= 0.1


Regards
--
Adrian Hey



Re: ANNOUNCE: GHC 6.8.1 Release Candidate

2007-09-13 Thread Adrian Hey

Ian Lynagh wrote:

Please test as much as possible; bugs are much cheaper if we find them
before the release!


With the windows build ghc-6.8.20070912-i386-unknown-mingw32.exe, when
I try something like..

 runghc Setup.hs build

I get this error message..

 ghc.exe: not built for interactive use

Regards
--
Adrian Hey






Re: GHC 6.6 panics when compiling HSet from collections package

2007-06-13 Thread Adrian Hey

Daniel McAllansmith wrote:

Hi.

I just got the collections package from 
http://darcs.haskell.org/packages/collections-ghc6.6


When trying to build it with GHC 6.6 on an amd64 linux machine using Cabal I 
got the following:


[12 of 57] Compiling Data.Tree.AVL.IntMap.Internals.HSet ( 
Data.Tree.AVL.IntMap/Data/Tree/AVL/IntMap/Internals/HSet.hs, 
dist/build/Data/Tree/AVL/IntMap/Internals/HSet.o )

ghc-6.6: panic! (the 'impossible' happened)
  (GHC version 6.6 for x86_64-unknown-linux):
cgPanic
tpl{v s2zg} [lid]
static binds for:
collections-0.3:Data.Tree.AVL.IntMap.Internals.HSet.intersectionMaybeH{v 
rji} [gid]
collections-0.3:Data.Tree.AVL.IntMap.Internals.HSet.$Lr2jfforkL{v r2jf} 
[gid]
collections-0.3:Data.Tree.AVL.IntMap.Internals.HSet.$Lr2jhforkR{v r2jh} 
[gid]

local binds for:
SRT label 
collections-0.3:Data.Tree.AVL.IntMap.Internals.HSet.intersectionWithH'{v 
rjg}_srt




Anyone seen this before?  A real GHC bug, a problem with my GHC installation, 
or a problem with the collections package?


It's a known bug in ghc 6.6, you need to upgrade to ghc 6.6.1.

BTW, beware of using some of the stuff that I've written for this :-)

The Data.Tree.AVL part (including Data.Map.AVL and Data.Set.AVL)
should be fairly safe as it's been heavily tested.

But the Data.Trie.General part is still under active development,
volatile, unfinished and completely untested.

Also, don't use the Data.Tree.AVL.IntMap stuff either if you can
avoid it. I believe it works fine, but I've decided it would be
best to obsolete this and subsume it within Data.Trie.General
as Data.Trie.General.IntGT.

Regards
--
Adrian Hey





Re: Why do we have stack overflows?

2007-05-08 Thread Adrian Hey

Simon Marlow wrote:
I'm more than happy to change the defaults, if there's some agreement on 
what the defaults should be.  The current choice is somewhat historical 
- we used to have a bound on both heap size and stack size, but the heap 
size bound was removed because we felt that on balance it made life 
easier for more people, at the expense of a bit more pain when you write 
a leaky program.


Well in the light of what Stefan says about exponentially increasing
stack size I'm not sure increasing (or removing) the default is the
answer. Going from 16M to 32M to 64M stacks is a bit drastic. It seems
to me going up in sane-sized linear increments would be better.

But since we also want to avoid frequent copying of an already oversized
stack I guess some linked list representation is what's needed. In fact
I'd think what Stefan suggests or something very similar would be the
way to go. But I have no idea how much work that would be.

But to give programs the best possible chance of running successfully, I
think an (optional) overall limit on total memory use would be
preferable (better than trying to guess how much stack space will be
needed in advance).

Regards
--
Adrian Hey


Re: GHC 6.6.1 Windows installer, test version

2007-05-07 Thread Adrian Hey

Neil Mitchell wrote:

Hi,

I've prepared a GHC 6.6.1 Windows installer. Before I offer this up as
an official installer, could people please test it?

http://www.haskell.org/ghc/dist/6.6.1/ghc-6.6.1-i386-windows-test1.exe



Thanks for that. It seems to install and compile the collections
package OK so I guess it's working.

Regards
--
Adrian Hey



6.6.1 for windows

2007-05-06 Thread Adrian Hey

Hello,

Can we expect a 6.6.1 binary and/or installer for windows sometime?

Thanks
--
Adrian Hey


Re: Cost of Overloading vs. HOFs

2007-05-04 Thread Adrian Hey

Duncan Coutts wrote:

One might hope that in this case we could hoist the extraction of the
dictionary members outside the inner loop.


This possibility had crossed my mind too. If HOFs really are faster
(for whatever reason) then it should be possible for a compiler to
do this automatically.

Regards
--
Adrian Hey



Re: Cost of Overloading vs. HOFs

2007-05-04 Thread Adrian Hey

Neil Mitchell wrote:

Hi Adrian


The GHC users guide says overloading "is death to performance if
left to linger in an inner loop" and one thing I noticed while
playing about with the AVL lib was that using a HOF and passing
the (overloaded) compare function as an explicit argument at the
start seemed to give noticable a performance boost (compared with
dictionary passing presumably).

I'm not sure why that should be, but has anyone else noticed this?


A HOF is one box, the Ord dictionary is an 8-box tuple containing HOF
boxes. When you invoke compare out of Ord, you are taking something
out of the tuple, whereas when you use the HOF it's directly there.


Well I can understand why overloading might be slow compared to
a direct call (presumably this is what you get from specialisation).

But I wouldn't have thought this additional indirection cost of
method lookup was very significant compared with the HOF approach
(at least not after everything was in cache). IOW I would have
expected HOFs to be just about as deathly to performance as
(unspecialised) overloading, but it seems this isn't the case.


This is also the reason you get a performance decrease moving from a
1-element class to a 2-element class.


Why is that? Does ghc pass just the single method rather than a
one-entry dictionary in such cases?

Regards
--
Adrian Hey



Cost of Overloading vs. HOFs

2007-05-04 Thread Adrian Hey

Hello,

The GHC users guide says overloading "is death to performance if
left to linger in an inner loop" and one thing I noticed while
playing about with the AVL lib was that using a HOF and passing
the (overloaded) compare function as an explicit argument at the
start seemed to give a noticeable performance boost (compared with
dictionary passing presumably).

I'm not sure why that should be, but has anyone else noticed this?
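To make the comparison concrete, here is a hedged sketch of the two styles using toy functions of my own (not from the AVL library): the first consults the Ord dictionary on every use of compare; the second extracts the comparison once and passes it as an ordinary argument, so the inner loop calls it directly.

```haskell
-- Overloaded: every call to compare goes via the Ord dictionary
-- passed implicitly to memberOvl.
memberOvl :: Ord k => k -> [k] -> Bool
memberOvl _ [] = False
memberOvl k (x:xs) = case compare k x of
  EQ -> True
  _  -> memberOvl k xs

-- HOF style: the comparison function is taken once, up front, and
-- the inner loop (go) uses it as a plain function argument.
memberHOF :: (k -> k -> Ordering) -> k -> [k] -> Bool
memberHOF cmp = go
 where
  go _ [] = False
  go k (x:xs) = case cmp k x of
    EQ -> True
    _  -> go k xs
```

A caller writes `memberHOF compare ...` at the top level, paying for the dictionary lookup once rather than per comparison.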

If so, maybe this advice should be added to the user guide, especially
if your function repeatedly uses just one method from a class?

(or maybe not if it's nonsense :-)

I guess having done this there would be little to gain by using the
SPECIALIZE pragma though (unless ghc also specialises HOFs).

Regards
--
Adrian Hey


Re: Why do we have stack overflows?

2007-05-04 Thread Adrian Hey

Malcolm Wallace wrote:

Just FYI, nhc98 has a single memory area in which the stack and heap
grow towards each other.  Memory exhaustion only happens when the stack
and heap meet in the middle and GC fails to reclaim any space.

However, it can only do this because it is single-threaded.  As soon as
you need a separate stack for each thread in a multi-threaded system,
this nice one-dimensional model breaks down.


Yes. A while ago I did the same thing with a toy FPL I was tinkering
with. But like nhc98, it was single threaded.

But I don't believe this is a serious extra complication. ghc does
seem to have the capability to grow stacks effectively without
bound (and presumably shrink them too), but it doesn't do this by
default for reasons I don't understand.

My preference would be to have an upper limit on total (stack+heap)
memory used. Also, as Stefan has suggested, I think stack should
grow linearly, not exponentially. But I don't really know enough
about the innards of ghc rts to know what might or might not be
possible/easy/difficult.

Regards
--
Adrian Hey



Re: Why do we have stack overflows?

2007-05-04 Thread Adrian Hey

John Meacham wrote:

I believe it is because a stack cannot be garbage collected, and must be
traversed as roots for every garbage collection. I don't think there are
any issues with a huge stack per se, but it does not play nice with
garbage collection so may hurt your performance and memory usage in
unforeseen ways.


I'm still not convinced :-(

I also don't believe it's in anybody's interest to have programs
failing for no good reason. A good reason to fail is if overall
memory demands are getting stupid. Failing because the stack has
grown beyond some arbitrary (and typically small) size seems
bad to me.

I know that to a certain extent this is controllable
using RTS options, but this is no use to me as a library
writer trying to choose between stackGobbler and heapGobbler.

The stuff should "just work" and not be dependent on the right
RTS incantations being used when the final program is run.

Regards
--
Adrian Hey




Re: Why do we have stack overflows?

2007-05-03 Thread Adrian Hey

Duncan Coutts wrote:

On Thu, 2007-05-03 at 16:24 +0100, Adrian Hey wrote:

Hello Folks,

Just wondering about this. Please understand I'm not asking why
programs use a lot of stack sometimes, but specifically why is
using a lot of stack (vs. using a lot of heap) generally regarded
as "bad". Or at least it seems that way given that ghc run time
makes distinction between the two and sets separate
limits for them (default max stack size being relatively small
whereas default max heap size in unlimited). So programs can
fail with a stack overflow despite having bucket loads of heap
available?


Perhaps it's there to help people who write simple non-terminating
recursion. They'll get an error message fairly soon rather than using
all memory on the machine and invoking the wrath of the OOM killer.


Hmm, I still don't see why a "stack leak" should be treated differently
from a "heap leak". They'll both kill your program in the end.

Regards
--
Adrian Hey



Why do we have stack overflows?

2007-05-03 Thread Adrian Hey

Hello Folks,

Just wondering about this. Please understand I'm not asking why
programs use a lot of stack sometimes, but specifically why is
using a lot of stack (vs. using a lot of heap) generally regarded
as "bad". Or at least it seems that way given that the ghc run time
makes a distinction between the two and sets separate
limits for them (default max stack size being relatively small
whereas default max heap size is unlimited). So programs can
fail with a stack overflow despite having bucket loads of heap
available?

Frankly I don't care if my program fails because it's used
a lot of stack or a lot of heap. I would rather set some
common memory budget and have them fail if that budget was
exceeded.

This policy seems to have unfortunate consequences. Sometimes
you end up re-writing stuff in a manner that just trades stack
use for heap use (I.E. doesn't do anything to reduce overall
memory consumption). Given the cost of reclaiming heap
is rather high (compared to stack), this seems like a bad idea
(the version that used a lot of stack would be better IMO
if only it didn't risk stack overflow).

Example..

-- Strict version of take
stackGobbler :: Int -> [x] -> [x]
stackGobbler 0 _  = []
stackGobbler _ [] = []
stackGobbler n (x:xs) = let xs' = stackGobbler (n-1) xs
in  xs' `seq` (x:xs')

-- Another strict version of take
heapGobbler :: Int -> [x] -> [x]
heapGobbler = heapGobbler' []
 where heapGobbler' rxs 0 _  = reverse rxs
       heapGobbler' rxs _ [] = reverse rxs
       heapGobbler' rxs n (x:xs) = heapGobbler' (x:rxs) (n-1) xs

But I guess everyone here is already aware of this, hence the question
(current ghc memory system design seems a bit odd, but maybe there's
a good reason why the rts can't work the way I would like).

Thanks
--
Adrian Hey




Re: strict bits of datatypes

2007-03-21 Thread Adrian Hey

Excuse me for moving this discussion over to ghc users.

(On Haskell prime) John Meacham wrote:

Although I have not looked into this much, My guess is it is an issue in
the simplifier, normally when something is examined with a case
statement, the simplification context sets its status to 'NoneOf []',
which means we know it is in WHNF, but we don't have any more info about
it. I would think that the solution would be to add the same annotation
in the simplifier to variables bound by pattern matching on strict data
types?

Just a theory. I am not sure how to debug this in ghc without digging
into it's code.


Well the latest on this is I've sent Simon some code which illustrates
the problem, but it seems not quite as simple as I first speculated, in
that a simple function that just overwrites tree elements takes exactly
the same time whether I rely on strictness annotations or explicit
seqs (something I would not expect if my original speculation was
correct).

But a more complex function that does insertions into the tree takes
about 15% longer using strictness annotations than it does with
explicit seqs. The object file seems quite a bit larger too.

BTW, I suspect one (perhaps the only) reason for the apparent jump
from 5% longer (as stated in my earlier post) to 15% is that I
modified the test so less time would be spent on garbage collection,
thereby amplifying the apparent difference in speeds.

I can post the relevant code to anyone who's interested and thinks
they might be able to explain this. I guess the next step would be
for someone who understands core to take a look at it. I'm afraid
I find core incomprehensible :-(

Regards
--
Adrian Hey



Re: Coverage Condition fails

2007-01-02 Thread Adrian Hey

Simon Peyton-Jones wrote:

The coverage condition is described in the paper
http://research.microsoft.com/~simonpj/papers/fd-chr

Use -fallow-undecidable-instances to make it compile.


Thanks, it seems to compile now. I've had a quick look at
the paper you mentioned and I also see the latest ghc user
guide has something to say about this too. But I'm still
not really clear about what the problem is :-(

(But I haven't really had time to study the paper properly
yet either.)


|  > -- Generalised Trie map class
|  > class Ord k => GT map k | map -> k , k -> map where
|
|  > -- Map type for pairs
|  > newtype (GT2 map1 map2 a) = GT2 (map1 (map2 a))
|
|  > -- Which is an instance of GT
|  > instance (GT map1 k1, GT map2 k2) => GT (GT2 map1 map2) (k1,k2) where


Intuitively, this looks quite unambiguous to me and allowing undecidable
anything worries me a bit. It makes me think my code is inherently
flaky somehow (not sure how right now though :-).

Regards
--
Adrian Hey




Coverage Condition fails

2006-12-30 Thread Adrian Hey

Hello folks,

Could somebody explain what the Coverage Condition is and
what this error message means..

"Illegal instance declaration for 'GT (GT2 map1 map2) (k1,k2)'
  (the Coverage Condition fails for one of the functional dependencies)
 In the instance declaration for 'GT (GT2 map1 map2) (k1,k2)'"

The offending code looks like this..

> -- Generalised Trie map class
> class Ord k => GT map k | map -> k , k -> map where

> -- Map type for pairs
> newtype (GT2 map1 map2 a) = GT2 (map1 (map2 a))

> -- Which is an instance of GT
> instance (GT map1 k1, GT map2 k2) => GT (GT2 map1 map2) (k1,k2) where

This is with ghc 6.6. The strange thing is this code used to compile
fine with earlier versions (but I don't know if it actually worked
because I never tested it).
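For anyone hitting the same message: with the context and one illustrative method filled in (the method body and the extra base instance are my stubs, not from the original code), the declarations compile once undecidable instances are enabled. In later GHCs the old `-fallow-undecidable-instances` flag is spelled as a `LANGUAGE` pragma.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances #-}

-- Generalised Trie map class, as in the post, with one method added
-- so the instance can actually be exercised.
class Ord k => GT map k | map -> k, k -> map where
  lookupGT :: k -> map a -> Maybe a

-- Map type for pairs.
newtype GT2 map1 map2 a = GT2 (map1 (map2 a))

-- The instance GHC 6.6 rejects without UndecidableInstances: neither
-- functional dependency is syntactically "covered" (e.g. map1, map2 do
-- not appear in (k1,k2)), even though the context pins everything down.
instance (GT map1 k1, GT map2 k2) => GT (GT2 map1 map2) (k1, k2) where
  lookupGT (k1, k2) (GT2 m) = lookupGT k1 m >>= lookupGT k2

-- A toy base instance so the pair instance has something to stand on.
newtype IntMapGT a = IntMapGT [(Int, a)]

instance GT IntMapGT Int where
  lookupGT k (IntMapGT xs) = lookup k xs
```

With UndecidableInstances, GHC applies the *liberal* coverage condition, under which the context's own fundeps (`map1 -> k1` etc.) are allowed to justify the instance.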

Thanks
--
Adrian Hey






Ticket 1031 workaround?

2006-12-30 Thread Adrian Hey

Hello folks,

I seem to have hit this problem..

 http://hackage.haskell.org/trac/ghc/ticket/1031

(I think, at least I'm getting a very similar incomprehensible
error message :-)

I tried using bang patterns instead of `seq` like this:

> let a0_ = f a0 a in a0_ `seq` (# l,hl,a0_,r,hr #)

becomes ..

> let !a0_ = f a0 a in (# l,hl,a0_,r,hr #)

but I still get the same error.

Could someone who knows explain exactly what the problem
is and what the workaround is (if there is one)?

Thanks
--
Adrian Hey



Re: dependingOn in 6.6

2006-08-11 Thread Adrian Hey

John Meacham wrote:

data Dummy1 = Dummy1
intVar1 :: IORef Int
intVar1 = unsafePerformIO (newIORef 0 `dependingOn` Dummy1)



data Dummy2 = Dummy2
intVar2 :: IORef Int
intVar2 = unsafePerformIO (newIORef 0 `dependingOn` Dummy2)


normally, you would have to compile with -fno-cse to keep these two
variables from being turned into one. however, since Dummy1 and Dummy2
will never be the same, the two terms cannot be considered the same by
the optimizer so there is no problem.


I would like to see this in GHC. It's still an ugly hack but at least
it's a better option than remembering to compile with -fno-cse IMO.
I guess you still need the NOINLINE pragma too though.
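Since `dependingOn` is a JHC primitive that never made it into GHC, the GHC idiom being compared against looks roughly like this (a sketch under that assumption: NOINLINE keeps each CAF as one shared thunk, and -fno-cse stops the two textually identical bodies being merged into a single IORef):

```haskell
{-# OPTIONS_GHC -fno-cse #-}  -- stop CSE unifying the identical bodies

import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

{-# NOINLINE intVar1 #-}
intVar1 :: IORef Int
intVar1 = unsafePerformIO (newIORef 0)

{-# NOINLINE intVar2 #-}
intVar2 :: IORef Int
intVar2 = unsafePerformIO (newIORef 0)
```

If CSE were allowed to merge the two definitions, a write through `intVar1` would become visible through `intVar2`, which is exactly the bug the `dependingOn` trick avoids by construction.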

Regards
--
Adrian Hey



Re: Profiling and Data.HashTable

2005-10-14 Thread Adrian Hey
On Friday 14 Oct 2005 3:17 pm, Ketil Malde wrote:
> Hi all,
>
> I have a program that uses hash tables to store word counts.  It can
> use few, large hash tables, or many small ones.  The problem is that
> it uses an inordinate amount of time in the latter case, and
> profiling/-sstderr shows it is GC that is causing it (accounting for
> up to 99% of the time(!))
>
> Is there any reason to expect this behavior?
>
> Heap profiling shows that each hash table seems to incur a memory
> overhead of approx 5K, but apart from that, I'm not able to find any
> leaks or unexpected space consumption.
>
> Suggestions?

Well you could use a StringMap..
 http://homepages.nildram.co.uk/~ahey/HLibs/Data.StringMap/

But that lib is a bit lightweight, so it probably doesn't provide
everything you need at the moment. But it's something I mean
to get back to when I have some time, so if there's anything
in particular you want, let me know and I'll give it some
priority.

You certainly should not need anything like 5k overhead per
map, and you don't have to work via the IO monad either
(though you can use an MVar StringMap or something if you
like).

Also, I seem to remember some thread about a problem
with the Data.HashTable implementation and its space behaviour.
Unfortunately I can't remember what the problem was and
don't know if it's been fixed :-(
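For comparison, a pure word-count can also be done with Data.Map from the standard libraries, avoiding both the IO monad and the per-table overhead; a minimal sketch:

```haskell
import qualified Data.Map as Map

-- Build word counts with a pure Map; insertWith (+) combines the
-- new count with any existing one for the same word.
wordCounts :: [String] -> Map.Map String Int
wordCounts = foldl (\m w -> Map.insertWith (+) w 1 m) Map.empty

main :: IO ()
main = print (Map.toList (wordCounts (words "a b a c b a")))
-- [("a",3),("b",2),("c",1)]
```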

Regards
--
Adrian Hey


Strictness annotations question

2005-09-14 Thread Adrian Hey
Hello,

One thing I discovered a while ago when fiddling about with optimising
the AVL library was that making the AVL constructors strict in the left
and right sub-tree fields resulted in slower code than using no
strictness annotations and explicit forcing of evaluation with `seq`.

I hope it isn't too presumptuous of me to hazard a guess as to what might
cause this :-) My guess is that it's because the AVL code contains a lot
of stuff like this..

-- With strictness annotations
 case blah of
 N l e r -> N l e (f r) -- l and r are left and right sub-trees

or

-- Without strictness annotations
 case blah of
 N l e r -> let r' = f r in r' `seq` N l e r'

Now if the compiler wasn't smart enough to figure out that in the first
example l was already reduced (because it's from a strict field), then
it would get diverted trying to reduce it again (pointlessly accessing a
heap record it didn't need, disrupting the cache etc.), so the net result
would be slower code.

But in truth I understand precious little about what analyses and
optimisations ghc is capable of, so this could all be complete nonsense.
So is this explanation plausible?

Also (pursuing this a little further), it occurs to me that if this
hypothesis is correct there could be other bad effects. For example,
suppose in an expression I have a constructor with a strict field, but the
constructor is used in a non-strict context. Presumably the compiler
won't generate this constructor in its "reduced form" (whatever that
might be for ghc :-) because this would force evaluation of the field
value. So it must construct some kind of thunk instead. But there's
no reason it would be so inhibited if the constructor were non-strict
(or if the compiler could figure out the field value was already
reduced).
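A minimal, self-contained sketch of the two styles being compared (the real AVL type carries balance information in the constructors; these hypothetical Tree types just illustrate strict sub-tree fields versus explicit `seq`):

```haskell
-- Strict fields: the constructor itself forces both subtrees.
data TS = LS | NS !TS Int !TS

-- No annotations: forcing must be spelled out with `seq`.
data TL = LL | NL TL Int TL

incS :: TS -> TS
incS LS = LS
incS (NS l e r) = NS (incS l) (e + 1) (incS r)

incL :: TL -> TL
incL LL = LL
incL (NL l e r) =
  let l' = incL l
      r' = incL r
  in l' `seq` r' `seq` NL l' (e + 1) r'

sumS :: TS -> Int
sumS LS = 0
sumS (NS l e r) = sumS l + e + sumS r

main :: IO ()
main = print (sumS (incS (NS (NS LS 1 LS) 2 LS)))  -- 5
```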

Thanks
--
Adrian Hey



Re: Accessing new heap costs more than old?

2005-09-12 Thread Adrian Hey
Hello Simon,

Thanks for your suggestion (I didn't know about this option)

On Monday 12 Sep 2005 12:04 pm, Simon Marlow wrote:
> On 05 September 2005 19:38, Adrian Hey wrote:
> > On Monday 05 Sep 2005 5:42 pm, Jan-Willem Maessen wrote:
> >> This agrees with my theory.
> >
> > Thanks, I'm sure this must be the explanation. I guess the moral of
> > the story is that for comparative benchmarks you really do need to
> > make sure you're comparing like with like. Very small differences in
> > test method can have a significant impact on running time it seems.
>
> Can you confirm the hypotheses by inspecting the output from +RTS
> -sstderr, which tells you the breakdown of GC vs. mutator time?

OK, I've done the tests and they do seem to confirm this, but
they're a bit misleading because the figures presumably include time
and energy spent calculating the test data too, which is non-trivial.
(The insertion times reported do not include test data calculation.)

--- Non-cumulative -
InsertTime = 280.0
1,061,418,244 bytes allocated in the heap
342,983,048 bytes copied during GC
 24,097,548 bytes maximum residency (16 sample(s))

    3652 collections in generation 0 (  3.90s)
      16 collections in generation 1 (  2.98s)

      62 Mb total memory in use

  INIT  time    0.00s  (  0.00s elapsed)
  MUT   time   10.73s  ( 10.74s elapsed)
  GC    time    6.88s  (  7.01s elapsed)
  EXIT  time    0.00s  (  0.00s elapsed)
  Total time   17.61s  ( 17.75s elapsed)

  %GC time      39.1%  (39.5% elapsed)

  Alloc rate    98,920,619 bytes per MUT second

  Productivity  60.9% of total user, 60.5% of total elapsed

--- Cumulative -
InsertTime = 775.0
1,068,399,252 bytes allocated in the heap
546,486,356 bytes copied during GC
 23,234,404 bytes maximum residency (20 sample(s))

    3679 collections in generation 0 (  6.89s)
      20 collections in generation 1 (  4.33s)

      61 Mb total memory in use

  INIT  time    0.00s  (  0.00s elapsed)
  MUT   time   11.89s  ( 11.93s elapsed)
  GC    time   11.22s  ( 11.34s elapsed)
  EXIT  time    0.00s  (  0.00s elapsed)
  Total time   23.11s  ( 23.27s elapsed)

  %GC time      48.6%  (48.7% elapsed)

  Alloc rate    89,856,959 bytes per MUT second

  Productivity  51.4% of total user, 51.1% of total elapsed

Regards
--
Adrian Hey



Re: Accessing new heap costs more than old?

2005-09-05 Thread Adrian Hey
On Monday 05 Sep 2005 5:42 pm, Jan-Willem Maessen wrote:
> This agrees with my theory.

Thanks, I'm sure this must be the explanation. I guess the moral of
the story is that for comparative benchmarks you really do need to
make sure you're comparing like with like. Very small differences in
test method can have a significant impact on running time it seems.

Regards
--
Adrian Hey


Re: Accessing new heap costs more than old?

2005-09-05 Thread Adrian Hey
On Saturday 03 Sep 2005 6:14 pm, Adrian Hey wrote:
> But I can't think of a plausible explanation for this. The overall heap
> burn rate should be about the same in each case, as should the overall
> amount of live (non-garbage) heap records.

Hmm.. A little more thought leads me to suspect that this is an artifact
of generational garbage collection, which I guess is much faster if
the youngest generation is entirely garbage. Is that reasonable?

Regards
--
Adrian Hey


Accessing new heap costs more than old?

2005-09-03 Thread Adrian Hey
Hello folks,

I wonder if anybody can shed any light on the strange behaviour I've
been seeing with some of my benchmarks. The story is..

While measuring execution times of various AVL routines on random data
I found that insertion was taking far longer than deletion (over twice
as long). This is surprising because if anything deletion is the more
complex operation. Anyway after much struggling and experimentation with
different options/inlining etc I failed to fix this so tried the same
tests with Data.IntMap and got similar unexpected results.

But I've now found that the root cause seems to be a subtle difference
in the two tests. For insertion the test was cumulative, so each new
insertion was on the tree resulting from the previous insertion. But
the deletions were always done on the same tree. If I modify
the insertion test to work the same way as deletion then sure enough,
insertion is faster than deletion (as I would expect). The same is true
for Data.IntMap too. (The insertion speeds for the two modes differ by
a factor of 2.8..2.9 for both Data.Tree.AVL and Data.IntMap.)

But I can't think of a plausible explanation for this. The overall heap
burn rate should be about the same in each case, as should the overall
amount of live (non-garbage) heap records.

I thought maybe it might be a cache effect, but if anything I would
expect caching to favour the cumulative mode (new allocations should
displace old ones from the cache AFAIK). Also profiling shows that the
cumulative case performs slightly fewer allocations (as I'd expect,
because it starts with an empty tree).

Anyway, just thought I'd mention it in the hope that there might be
something that can be done about it. The cumulative case seems like
it would be more typical of real-world code, so taking a factor of
3 or so performance hit is undesirable IMHO, but may well
be unavoidable :-(

Regards
--
Adrian Hey



Question about ghc case analysis

2005-08-09 Thread Adrian Hey
Hello,

Well this is actually a couple of questions..

First question is how does ghc do case analysis on algebraic
data type constructors. I always assumed it was by using some
kind of jump table on a tag field, but I don't know really (I'm
not even sure ghc makes use of tag fields as such).

Second question is really the same question but for literals.
What search algorithm is employed and is it likely to be
worth hand coding something else? (a binary search maybe).

I'm thinking of code like this..
 case n of
 0 ->
 7 ->
 34622 ->
 .. lots more
 _     ->
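If the generated dispatch turns out to be linear, one portable fallback is to move the table into a Data.Map, which guarantees a balanced-tree (binary) search regardless of what the compiler emits for literal cases; the table below is hypothetical:

```haskell
import qualified Data.Map as Map

-- Hypothetical replacement for a large literal case: O(log n)
-- lookup via a balanced tree, independent of how the compiler
-- compiles case expressions on literals.
table :: Map.Map Int String
table = Map.fromList [(0, "zero"), (7, "seven"), (34622, "big")]

classify :: Int -> String
classify n = Map.findWithDefault "other" n table

main :: IO ()
main = mapM_ (putStrLn . classify) [0, 7, 42]
```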

Thanks
--
Adrian Hey



Re: Set, Map libraries

2005-06-03 Thread Adrian Hey
On Saturday 04 Jun 2005 1:33 am, Jan-Willem Maessen wrote:
> Replace "4 million" by, say, 2^32 or 2^64 and I think the point stands.
>   The set must fit in your addressable memory, and can thus be counted
> by a similar-sized Int.
>
> Note also that set implementations which cache the size locally will
> have this property in general, whether the rest of the structure is
> strict or not---we'll at least have to compute how many insertions and
> deletions have occurred, and left the thunks kicking around if we
> haven't actually performed them completely yet.

I'm afraid I still don't really understand the point we're debating, so
I can't comment on whether or not it stands (unless the point is
that you can't deal with sets that won't fit in available memory :-)

Is that all we're discussing here? Or maybe it's a point about word
size used to represent Ints? JPB's remarks about strictness led me
to suspect there might be some unstated algorithmic insight behind
them (like a lazy implementation would not be so limited, or would
offer better performance or space behaviour perhaps). But maybe I
was wrong.

Regards
--
Adrian Hey



Re: Set, Map libraries

2005-06-03 Thread Adrian Hey
On Thursday 02 Jun 2005 10:18 am, Jean-Philippe Bernardy wrote:
> The definition of the Set datatype being
>
> data Set a= Tip
>
>   | Bin {-# UNPACK #-} !Size a !(Set a) !(Set a)
>
> type Size = Int
>
> It seems you're out of luck when it comes to very large sets.
>
> Also, since the structure is strict, it makes little sense to support
> 4-million-element sets.

I'd be interested to know why you say that. What would you use instead
if you needed 4-million-element sets?

The AVL trees in my implementation are strict and perfectly capable
of supporting such sets. Same should be true of Data.Set too AFAICS.

Regards
--
Adrian Hey


Legal package names

2005-03-07 Thread Adrian Hey
Hello,

I've been trying 6.4 on one of my libraries and it seems it doesn't
like my package names. I give my packages the same name as their
place in the module hierarchy.

e.g.
name: Data.COrdering

Is there any reason why this shouldn't be allowed? It seems
so much more convenient this way, at least for packages
that have just one root module.

Regards
--
Adrian Hey



Re: Unboxed Tuples

2005-02-07 Thread Adrian Hey
On Monday 07 Feb 2005 9:28 am, Simon Peyton-Jones wrote:
> Good point.  Yes you can rely on it; but the binding is lazy.  So for
> this
>
>   h :: Int -> (# String, String #)
>   h = ...
>
>
>   f x = let (# p,q #) = h x in ...
>
> you'll get
>
>   f x = let (p,q) = case h x of (# p,q #) -> (p,q)
>   in ...
>
> So the call to h only happens when either p or q is used.
>
> On the other hand, if any of the binders in a let-pattern has an
> unlifted type (e.g. Int#) then the whole pattern match becomes strict.
> So if p or q had an unlifted type, the match would be strict.
>
> I'll add a note to the user manual.

Thanks. Sorry if I'm being a bit dim, but could you tell me if ghc
will optimise out the boxed tuple construction in both these cases?

I guess the answer must be yes in the second case because AFAIK
you can't build an ordinary tuple containing an Int#. (But maybe
I'm wrong.)
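A small sketch of the strict case-deconstruction style under the UnboxedTuples extension; whether the boxed pair really is optimised away in the let-pattern form is exactly the question above, so treat this as illustration rather than an answer:

```haskell
{-# LANGUAGE UnboxedTuples #-}

h :: Int -> (# String, String #)
h n = (# show n, show (n + 1) #)

-- Strict deconstruction: no intermediate boxed pair in the source.
f :: Int -> String
f x = case h x of (# p, q #) -> p ++ "," ++ q

main :: IO ()
main = putStrLn (f 10)  -- 10,11
```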

Regards
--
Adrian Hey


Unboxed Tuples

2005-01-13 Thread Adrian Hey
Hello,

Does the user guide documentation for these reflect current ghc compiler?

Para. 7.2.2 says they can only be deconstructed using case expressions
but by accident I've found they seem to work fine in let bindings too
(with ghc version 6.2.2).

Not that I'm complaining (I always thought using case expressions was
just too painful :-). I just wanted to check that this is a feature
I can rely on in future (and if so, suggest that the user guide be
amended to reflect this).

Regards
--
Adrian Hey

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Top level mutable data structures problem

2004-10-20 Thread Adrian Hey
On Wednesday 20 Oct 2004 4:38 pm, Simon Peyton-Jones wrote:
> * "Respecting module dependencies" means that if M imports N (directly
>or indirectly) then N's initialisation is done before M's.  Hi-boot
> recursive
>dependencies are not taken into account; that's where any module
>loops are broken

I was hoping that one day Hi-boot would not be necessary. So I'm not
sure what ordering you'd get in that case with what's proposed. But
let's not worry about that right now :-)

> As Simon M says, don't hold your breath... but I'd be interested to know
>
> a) whether this story commands a consensus

Well actually at the moment I don't agree with this proposal, but I may be
in a minority of one. That's why I wanted to find out what's happening.
(See my response to Simon M for my reasons).

Wolfgang definitely seems to want this though, but I'd better leave it to
him to explain why.

IMO the one good thing about the unsafePerformIO hack is that there are no
guarantees about whether or when the corresponding actions will occur.
This fact is a powerful disincentive to abuse of, or over-reliance on, this
feature I think. I.e. people are not likely to use it for any purpose
other than the creation of top level mutable data structures. In particular
they can't rely on any "side effects" of creation if there's no guarantee
that creation will ever occur. (Well, not unless they use `seq` somewhere
to force it.)

> b) how much happier your life would be if it were implemented

Well IME there are two common uses of unsafePerformIO in my progs..
 1- Creation of top level mutable data structures (which may
live outside Haskell land sometimes).
 2- As a "type cast" for FFI functions which really are pure,
but have type IO anyway because of marshalling etc..

I would really like a solution to usage 1 that didn't use unsafePerformIO,
because it really is _unsafe_. Usage 2 is ugly perhaps, but AFAICS is
perfectly safe (provided the foreign function in question really is a
function), so doesn't require the use of NOINLINE, -fno-cse etc.
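Usage 2 can be sketched with a genuinely pure C function; `sin` from the maths library stands in here for the kind of foreign function meant above (imported in IO only because of marshalling):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
import System.IO.Unsafe (unsafePerformIO)

-- `sin` is pure, but suppose marshalling forces an IO import;
-- unsafePerformIO then acts as the "type cast" described above.
foreign import ccall unsafe "math.h sin" c_sin :: Double -> IO Double

sinPure :: Double -> Double
sinPure = unsafePerformIO . c_sin

main :: IO ()
main = print (sinPure 0)  -- 0.0
```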

Regards
--
Adrian Hey



Re: Top level mutable data structures problem

2004-10-20 Thread Adrian Hey
On Wednesday 20 Oct 2004 3:46 pm, Simon Marlow wrote:
> I liked the original idea.  I'm not sure if I agree with the argument
> that allowing fully-fledged IO actions in the initialisation of a module
> is unsafe.  I agree that it is a little opaque, in the sense that one
> can't easily tell whether a particular init action is going to run or
> not.

I'm not sure who was arguing that (not me anyway :-). I think the
argument was about whether or not there should be any such thing
as "module initialisation" at all, or at least that's what concerns
me most. (I don't think there should be).

> In any case, we're not going to rush to implement anything.  Discuss it
> some more ;-)

Good plan :-)

FWIW, at the moment the executive summary of MHO is that the laziness
and unpredictability of the unsafePerformIO hack is the one thing I like
about it and want to keep. What I don't like about it is the unsafety.

The unsafety I'm talking about is _not_ the unsafety that arises from allowing
arbitrary IO operations. I'm talking about the fact that the compiler
cannot be relied upon to generate code that accurately reflects the
programmer's intentions without using the NOINLINE pragma and -fno-cse
flag (the latter applying to an entire module).

There's also the (IMO) secondary issue of whether or not the IO actions
allowed during construction should be constrained in any way (e.g. using
type system tricks). I suggested a simple (perhaps naive) way to do this
(a SafeIO monad), but I don't have any particularly strong views on this
either way. Just relying on programmers to use some common sense seems
fine to me also (this is what's currently done for finalisers, for example).

Regards
--
Adrian Hey



Top level mutable data structures problem

2004-10-20 Thread Adrian Hey
Hello,

[Excuse me for moving this discussion to the ghc mailing list,
but it seems the appropriate place, seeing as ghc is where
any solution will happen first in all probability.]

I've noticed that neither of the two Simons has expressed an
opinion re. the discussions on this issue that occurred on the
Haskell mailing list over the weekend. So I'd like to ask what's
happening (or likely to happen) about this, if anything?

Regards
--
Adrian Hey



Re: Using packages in ghci

2004-10-02 Thread Adrian Hey
On Friday 01 Oct 2004 9:36 pm, Simon Marlow wrote:
> Looks fine - GHCi is supposed to look in the directories in import_dirs
> for .hi files.  What does ghci -v tell you?

Quite a lot :-), but nothing very surprising. I think I've found what causes
the problem. It does actually seem to work as expected, provided the current
directory is not what it usually is when I'm working on the library.

I.E. /home/adrian/HaskellLibs/Data.Tree.AVL

This is what I get

>Prelude> :m Data.Tree.AVL
>Prelude Data.Tree.AVL> asTreeL "ABCD"
>
>:1:
>tcLookup: `asTreeL' is not in scope
>In the definition of `it': it = asTreeL "ABCD"
>
>Failed to load interface for `Data.Tree.AVL.List':
>Bad interface file: ./Data/Tree/AVL/List.hi
>./Data/Tree/AVL/List.hi: openBinaryFile: does not exist (No such file or directory)
>
>Failed to find interface decl for `asTreeL'
>from module `Data.Tree.AVL.List'

But if I cd to..

/home/adrian/HaskellLibs/Data.Tree.AVL/pkg

..it works fine. I've since discovered it also seems to work fine from
any other directory too. So it seems to be something peculiar about the
one particular directory that upsets it.

This is with version 6.2.20040915, but 6.2.1 did the same thing IIRC.

Regards
--
Adrian Hey



Using packages in ghci

2004-10-01 Thread Adrian Hey
Hello,

Where does ghci look for .hi files from packages? (It doesn't seem to
be the same place as ghc.) AFAICT it expects to find them relative to
the current directory, and I can only get it to work by cding to
the appropriate directory *before* invoking ghci (doing this from
within ghci seems to really mess things up).

But I guess this isn't what's supposed to happen because this
solution will only work with one package.

My package entry looks like this..

Package
   {name = "Data.Tree.AVL",
auto = True,
import_dirs = ["/home/adrian/HaskellLibs/Data.Tree.AVL/pkg"],
source_dirs = [],
library_dirs = ["/home/adrian/HaskellLibs/Data.Tree.AVL/pkg"],
hs_libraries = ["Data.Tree.AVL"],
extra_libraries = [],
include_dirs = [],
c_includes = [],
package_deps = ["base", "Data.COrdering"],
extra_ghc_opts = [],
extra_cc_opts = [],
extra_ld_opts = [],
framework_dirs = [],
    extra_frameworks = []}

Is there something missing here?

Thanks
--
Adrian Hey



Re: Release candidate for 6.2.2 availabe

2004-09-15 Thread Adrian Hey
Hello,

I thought we were going to get a new FFI function soon..
 http://www.haskell.org/pipermail/ffi/2004-March/001740.html

AFAICS it isn't in the libs for this release :-(
Has this got lost or forgotten about?

Regards
--
Adrian Hey




Re: FW: Space usage

2004-08-18 Thread Adrian Hey
On Wednesday 18 Aug 2004 4:56 pm, Simon Peyton-Jones wrote:
> It does; see my reply below

Oops, sorry. It seems I missed that for some reason.

Regards
--
Adrian Hey


Re: Space usage

2004-08-18 Thread Adrian Hey
On Tuesday 17 Aug 2004 5:11 pm, Malcolm Wallace wrote:
> It is also possible to use Wadler's
> garbage-collector fix for this space leak, as implemented in nhc98.
> P Wadler, "Fixing a Space Leak with a Garbage Collector", SP&E Sept
> 1987.
>
> When the GC discovers a selector function applied to an evaluated
> argument, it "evaluates" the selector on-the-fly by just swizzling
> pointers.  It needs some co-operation from the compiler to make
> selector functions look obvious, but that isn't too difficult.

So ghc doesn't do this (or something better)? I'm surprised
because it seems like a really basic and simple feature to me. I
implemented a toy FPL a few years ago and even my gc incorporated
this optimisation. It's a bit strange that this should have been
overlooked considering in all other respects ghc is far more
sophisticated than my efforts were :-)

Regards
--
Adrian Hey



Re: Understanding strictness of ghc output

2004-06-22 Thread Adrian Hey
On Tuesday 22 Jun 2004 2:28 pm, Simon Peyton-Jones wrote:
> The DmdType for the Int# is indeed "L" but that's irrelevant because
> Int# values are always evaluated.  The demand info is always L for an
> unboxed type.

Thanks, I had noticed it did appear to have decided h was unboxed
(assuming my interpretation of core was correct), so it seemed rather
strange that addHeight could be lazy (non-strict) in that argument.

So does L mean "Lazy", or something else?

Thanks
--
Adrian Hey


Understanding strictness of ghc output

2004-06-22 Thread Adrian Hey
Hello,

I'm trying to figure out how you tell if ghc has correctly infered
strictness or whether or not a little more prompting from me
is needed.

I tried compiling with -ddump-simpl, and I guess from looking
at this the DmdType bit is what I want (maybe). So if I have
"DmdType LS" for a function of arity 2, does this mean the
function is lazy in the first argument and strict in the second?

I would be pretty confident that this was the correct interpretation,
but this is the Haskell code (from AVL library)..

height :: AVL e -> Int
height = addHeight 0 where
 addHeight h  E         = h
 addHeight h (N l _ _) = addHeight (h+2) l
 addHeight h (Z l _ _) = addHeight (h+1) l
 addHeight h (P _ _ r) = addHeight (h+2) r

It seems pretty obvious to me that addHeight is strict in its
first argument if + is strict for Ints (as I guess it is). But this
gives "DmdType LS".

Even if I rewrite it..

height :: AVL e -> Int
height = addHeight 0 where
 addHeight h  E         = h
 addHeight h (N l _ _) = let h' = h+2 in h' `seq` addHeight h' l
 addHeight h (Z l _ _) = let h' = h+1 in h' `seq` addHeight h' l
 addHeight h (P _ _ r) = let h' = h+2 in h' `seq` addHeight h' r

.. it still gives "DmdType LS".

So does this..

height :: AVL e -> Int
height = addHeight 0 where
 addHeight h  E         = h
 addHeight h (N l _ _) = h `seq` addHeight (h+2) l
 addHeight h (Z l _ _) = h `seq` addHeight (h+1) l
 addHeight h (P _ _ r) = h `seq` addHeight (h+2) r

So am I interpreting core correctly?
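For what it's worth, forcing the accumulator at the call site with `$!` is yet another way of writing the same loop; a self-contained sketch, with the library's constructors simplified to carry no balance information:

```haskell
-- Simplified stand-in for the library's AVL type.
data AVL e = E
           | N (AVL e) e (AVL e)
           | Z (AVL e) e (AVL e)
           | P (AVL e) e (AVL e)

height :: AVL e -> Int
height = addHeight 0 where
  addHeight h  E         = h
  addHeight h (N l _ _) = (addHeight $! h+2) l
  addHeight h (Z l _ _) = (addHeight $! h+1) l
  addHeight h (P _ _ r) = (addHeight $! h+2) r

main :: IO ()
main = print (height (Z (Z E 'b' E) 'a' E))  -- 2
```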

Thanks
--
Adrian Hey


Haddock bug?

2004-05-06 Thread Adrian Hey
Hello,

There seems to be a bug in Haddock 0.6 which causes it not to list
indexes that don't contain an upper-case identifier.

At the moment I'm fixing the problem by creating a dummy data type
in my top level wrapper module..

 data Dummy = A|B|C|D|E|F|G|H|I|J|K|L|M|N|O|P|Q|R|S|T|U|V|W|X|Y|Z

..which seems to fix this problem, but I'd rather not have to do this:-)
Is there a better work around for this?

Also, reading the release notes, there's mention of --gen-contents
and --use-contents flags, but these don't seem to be documented in
the user guide AFAICS.

Regards
--
Adrian Hey




Re: Inlining question

2004-04-19 Thread Adrian Hey
On Monday 19 Apr 2004 11:27 am, Adrian Hey wrote:
> Perhaps I was doing something stupid.

Yep, I must have been doing something stupid. I've just tried it again and I
do get different object files. In fact inlining seems to give me smaller
object files (not that I'm complaining :-).

Sorry everybody. I can't think what I must have done first time around
(maybe I spelled INLINE wrong, or forgot to save the file I was editing
or something).

Regards
--
Adrian Hey


Re: Inlining question

2004-04-19 Thread Adrian Hey
On Monday 19 Apr 2004 9:52 am, Simon Peyton-Jones wrote:
> | > Does inlining work with any function definitions (including those
> | > defined locally following a where.. say), or only with top level
> | > function definitions?
>
> If you mean
>
>   In GHC, if I use an {-# INLINE #-} pragma on a
>   nested function definition, should it be inlined?

Yes, sorry if I wasn't clear, that is what I mean.

> then the answer is yes, unless the function is recursive in which case
> the pragma is ignored.

The function is indirectly recursive, in that I have a pair of mutually
recursive functions, one of which I want to inline (so it becomes a single
recursive function).

Of course this means I'd have to move them both to the top level if that
was required to inline one of them, or maybe add another function
argument to the one being lifted (but this shouldn't be necessary anyway
going by what you've just told me).

> Do you have reason to suppose the contrary?  If so, do send a test case

Well I'm just going by comparison of the resulting object files, but it
seemed that they were identical whether I used {-# INLINE foo #-},
{-# NOINLINE foo #-}, or nothing at all. If you think that shouldn't be
the case I'll have another go. Perhaps I was doing something stupid.

I have another question if I may. Does inlining occur before or after
strictness analysis? (and would it make any difference to the results
of strictness analysis anyway?)

I imagine it would make a difference if the function being inlined takes
other functions with known strictness properties as arguments, but I may
well be wrong :-)

Thanks
--
Adrian Hey



Re: Inlining question

2004-04-15 Thread Adrian Hey
On Tuesday 13 Apr 2004 10:10 am, Adrian Hey wrote:
> Hello,
>
> Does inlining work with any function definitions (including those
> defined locally following a where.. say), or only with top level
> function definitions?

Well, as far as I can tell from my experiments, it does only work
for top level definitions (or to be more precise, it seems not
to work with local definitions).

I assume it does work with top level definitions, so to get (what
would otherwise be) local definitions inlined I have to lambda lift
them manually to the top level?

The remark in the manual about inline pragmas occuring anywhere where
a type signature could occur would seem to confirm this (for Haskell
98 at least).

I'd be grateful if somebody who knows could confirm this (or not perhaps).
I don't want to uglify my code if it's not necessary :-(
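A top-level sketch of the pattern in question (lambda-lifting a local helper so it can carry an INLINE pragma; the names are hypothetical). Recursive definitions themselves ignore INLINE, so only the non-recursive helper carries it:

```haskell
-- Lifted to the top level so the INLINE pragma applies.
{-# INLINE step #-}
step :: Int -> Int
step n = n * 2 + 1

-- The recursive loop calls the inlinable helper.
loop :: Int -> Int -> Int
loop 0 acc = acc
loop k acc = loop (k - 1) (step acc)

main :: IO ()
main = print (loop 3 0)  -- 7
```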

Thanks
--
Adrian Hey



Inlining question

2004-04-13 Thread Adrian Hey
Hello,

Does inlining work with any function definitions (including those
defined locally following a where.. say), or only with top level
function definitions?

Thanks
--
Adrian Hey



Re: Using -split-objs

2004-03-16 Thread Adrian Hey
On Monday 15 Mar 2004 1:09 pm, Simon Marlow wrote:
> > The ghc users guide contains this helpful advice about split-objs..
> >
> >  "don't use it unless you know what you're doing!"
> >
> > OK, I admit I don't know what I'm doing. But despite this handicap
> > I would like to use split-objs to build a library. I've already
> > discovered it doesn't work with --make, so I guess I have to use
> > a makefile.
> >
> > Are there any other gotchas waiting for me?
> > Anything else I need to know?
>
> You'll probably need special Makefile rules to handle it too.  I think
> you need to make the subdirectory for the objects (M_split/) beforehand,
> and touch the object file M.o afterward (so that make dependencies still
> work).

Thanks, I'm not sure exactly what you had in mind, but I ended up
with a rule like this..

HC_OPTS = -Wall -O -split-objs -no-recomp -i$(PKGDIR) -hidir $(PKGDIR) -odir $(ODIR)

# How to build (real & dummy) %.o files
$(ODIR)/%.o: %.hs
	@echo Compiling: $<
	@rm -f $(ODIR)/$(*F)__*.o  # Delete the old split objects
	@$(HC) -c $(HC_OPTS) $<    # Create the new split objects
	@touch $@                  # Create/update the dummy object

There seem to be 2 gotchas..
 1- -split-objs has no effect when creating dependencies using the -M option.
The resulting *.o targets will not be created when actually compiling
(instead you get lots of *__*.o files).
 2- These *__*.o files are not placed in the directories one would expect
for hierarchical modules. They all get placed in the root -odir.

To get around this the above rule uses touch to create a dummy object file
in the place where the real object file would be placed if -split-objs had not
been used. This seems to make the dependencies generated using the -M option
work OK. I guess this must be what you meant (something like it anyway :-).

Regards
--
Adrian Hey


Using -split-objs

2004-03-15 Thread Adrian Hey
Hello,

The ghc users guide contains this helpful advice about split-objs..

 "don't use it unless you know what you're doing!"

OK, I admit I don't know what I'm doing. But despite this handicap
I would like to use split-objs to build a library. I've already
discovered it doesn't work with --make, so I guess I have to use
a makefile.

Are there any other gotchas waiting for me?
Anything else I need to know?

Thanks
--
Adrian Hey




Re: How to write this correct...

2004-02-22 Thread Adrian Hey
On Sunday 22 Feb 2004 10:03 am, Hans Nikolaus Beck wrote:
> Hi,
>
> I have the following problem:
>
> Vertex3 ist defined as
>
> data Vertex3 a = Vertex3 a a a
>
> a is defined as
>
> class a VertexComponent
>
> But I fail to write to following correct
>
> type GLVertex = (GLfloat, GLfloat, GLfloat)
>
> toVertex :: GLVertex -> Vertex3 a <<<<<<<< how do it correctly
> toVertex (x,y,z) = Vertex3 x y z
>
> The compiler says "cannot infer a with type GLFloat" or something like
> this I don't understand.
>
> Thank you for help

The type signature is wrong. Try this..
 toVertex :: GLVertex -> Vertex3 GLfloat

or perhaps..
 toVertex :: (a,a,a) -> Vertex3 a
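
As a self-contained illustration of why the first fix works (GLfloat is replaced by plain Float here so no OpenGL modules are needed; this is a sketch, not the original poster's code):

```haskell
-- Minimal sketch of the fix; GLfloat replaced by Float so the example
-- needs no OpenGL packages.
data Vertex3 a = Vertex3 a a a deriving (Eq, Show)

type GLVertex = (Float, Float, Float)

-- The original signature promised 'Vertex3 a' for *any* a, but the
-- tuple fixes the components to one concrete type, hence the error.
toVertex :: GLVertex -> Vertex3 Float
toVertex (x, y, z) = Vertex3 x y z

-- The fully polymorphic alternative also type-checks:
toVertex' :: (a, a, a) -> Vertex3 a
toVertex' (x, y, z) = Vertex3 x y z
```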

Regards
--
Adrian Hey



Re: threadDelay problem

2004-01-10 Thread Adrian Hey
On Wednesday 07 Jan 2004 8:01 pm, John Meacham wrote:
> On Wed, Jan 07, 2004 at 12:38:11PM -, Simon Marlow wrote:
> > > The idea is to stop a Haskell prog with no runable threads or events
> > > to process hogging CPU time. I was a bit dissapointed to find
> > > that despite
> > > this on my (otherwise idle) machine the above code was still
> > > using 96% of cpu time.
> > >
> > > But fiddling about a bit I found that by changing the thread
> > > delay in the
> > > main loop to 20 mS the CPU usage dropped to 1% or so, which
> > > is much more
> > > acceptable (well it did say 1/50 th second accuracy in the docs :-).
> >
> > Yes, threadDelay will round the interval down to the nearest multiple of
> > the resolution, so you were effectively using threadDelay 0.  This ought
> > to be mentioned in the documentation.
>
> this seems like the incorrect (and inconsistant with system interfaces)
> approach. rounding up is the norm. alarm and sleep both say you will
> sleep at least as long as the time specified. and threadDelay should
> behave similarly. it is the more useful semantics anyway, since you are
> usually waiting for an event to occur, and it is pretty much always okay
> to wait to long, but can be bad (as in this case) to wait to short.
> John

I'll vote for that too. It seems unlikely that this change would break
any existing code and, as John says, waiting for longer than specified
seems like the lesser of two possible evils (especially when there's
no absolute guarantee that the thread will resume as soon as the
specified delay time has elapsed in any case).

Regards
--
Adrian Hey


Re: Running a "final" finaliser

2004-01-07 Thread Adrian Hey
On Wednesday 07 Jan 2004 11:43 am, Simon Marlow wrote:
> > Hmm, further experiments with creating zillions of garbage
> > ForeignPtrs (not just 1) reveals that the problem only occurs
> > if *no* garbage collection has occured before the program shuts
> > down. In other words, as long as at least one garbage collection
> > has occured, it doesn't matter if library shutdown occurs immediately
> > in response to killLibRef or if it's deferred until the reference
> > count hits zero as a result of finalisers being called. (This test
> > is without the explicit performGC of course.)
> >
> > So (hoping I will not have to eat my words:-) I'm begining to suspect
> > this is a buglet in the ghc rts somewhere.
>
> It may be a bug; I can't see anything obviously wrong in your code.  The
> best way to proceed is for you to send us a complete test case which is
> producing the (claimed) incorrect output, and we'll look into it.

Oops, I've modified all the code to use Carl Witty's suggested solution.
(seems like the simplest portable solution, all things considered).

I'll see if I can put something back together to demonstrate the problem.
(might be a day or two though).

Regards
--
Adrian Hey




threadDelay problem

2004-01-06 Thread Adrian Hey
Hello,

Whilst experimenting with concurrency and a library which expects me to
use the usual event loop I wrote some test code like this..  

main :: IO ()
main = do
  forkIO forked
  loop

-- Main Loop
loop :: IO ()
loop = do
  maybeEvent <- pollEvent  -- non-blocking
  case maybeEvent of
    Just event -> do dispatch event
                     loop
    Nothing    -> do threadDelay 10000  -- 10000 uS = 10 mS
                     loop

-- Concurrent thread for test purposes
forked :: IO ()
forked = do threadDelay 1000000 -- 1000000 uS (= 1 S)
            putStrLn "Hello"
            forked

The idea is to stop a Haskell prog with no runnable threads or events
to process from hogging CPU time. I was a bit disappointed to find that despite
this, on my (otherwise idle) machine the above code was still using 96% of
cpu time.

But fiddling about a bit I found that by changing the thread delay in the
main loop to 20 mS the CPU usage dropped to 1% or so, which is much more
acceptable (well it did say 1/50 th second accuracy in the docs :-).

So this all seems OK, but I wonder if this is a reliable solution on all
platforms ghc is ported to (given that the consequence of getting this wrong
can be a very greedy program). Is the 20 mS resolution figure a function of
ghc only? Or might it depend on other factors too (like OS)? Or might it
change with ghc version?

I think it would nice to have some reliable way to ensure that in reality
the thread delay will be as small as possible, but always greater than zero.
Maybe a threadDelayResolution constant could be provided?
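
Lacking such a constant, one workaround is to round requested delays up against an assumed resolution. A sketch, where `assumedResolution`, `roundUpDelay`, `threadDelayAtLeast` and the 20 mS figure are all inventions for illustration, not a real GHC API:

```haskell
import Control.Concurrent (threadDelay)

-- Assumed scheduler resolution in microseconds (the 20 mS figure that
-- worked above). This constant is an assumption, not something GHC exports.
assumedResolution :: Int
assumedResolution = 20000

-- Round a requested delay *up* to a whole, non-zero number of resolution
-- ticks, so we never accidentally ask for an effective delay of zero.
roundUpDelay :: Int -> Int
roundUpDelay us =
  assumedResolution * max 1 ((us + assumedResolution - 1) `div` assumedResolution)

threadDelayAtLeast :: Int -> IO ()
threadDelayAtLeast = threadDelay . roundUpDelay
```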

Also, assuming that currently the (hypothetical) threadDelayResolution is
20000 and that in reality the actual threadDelay is N*20000 where N is an
integer, I think the docs for threadDelay should specify how N is calculated
from the argument of threadDelay. At the moment choosing a thread delay of
10000 sends my CPU usage right back up to 96%, though whether that's as
specified or merely a property of ghc 6.2 under Redhat 9 seems a little
ambiguous :-)

BTW, the docs for Control.Concurrent mention something about an -i RTS
option, but I can't find any info on this in the user guide. When I try
it I get an error, listing all RTS options. There's no -i listed but there
is something that looks related, a -C option, but this doesn't appear in
the ghc user guide either :-(

Regards
--
Adrian Hey




Re: Running a "final" finaliser

2004-01-06 Thread Adrian Hey
Hello,

Thanks for that explanation, I see the problem now.

Though to be honest, I'm not convinced that the situation
for Haskell implementations which don't implement pre-emptive
concurrency need be as bad as you suggest. But that's probably
presumptuous of me, seeing as I know little about the
implementations of either Hugs or nhc (or ghc for that matter :-)

I can see there is a potential problem with single threaded programs
which may never call yield, though even in this situation I would
think it would be relatively straightforward to have an implicit
yield to finalisers at appropriate points (no partially reduced
thunks in the heap).

But then again, I guess the logic is that since foreign object
finalisers will usually be foreign functions which don't re-enter
Haskell it's probably not worth the effort.

The other thing that strikes me about this is don't we also have
the same potential problem with weak pointer finalisers? Can they
be supported in Haskell without pre-emptive concurrency?

Regards
--
Adrian Hey 

On Monday 05 Jan 2004 4:39 pm, Alastair Reid wrote:
> > > I'm afraid I still don't fully understand why Haskell
> > > finalisers are unsafe or why (if) calling Haskell from a C finaliser
> > > (which then called C land again) would be any safer.
>
> The FFI standard doesn't say that calling C finalizers is unsafe (which
> would imply that the finalizers potentially do something naughty).  Rather,
> the standard says that finalizers are called in a restricted context in
> which they are not allowed to call back into Haskell.
>
> The reason that finalizers must be written in C and cannot call into
> Haskell is that it requires pretty much all the machinery needed to
> implement preemptive concurrency (multiple C stacks, context switches,
> etc.) which was felt to be an excessively high burden on a Haskell
> implementation just to let you call C functions.  (Of course, GHC already
> has this machinery which is why they provide enhanced functionality.)
>
> Why does it require most of the machinery of preemptive concurrency?
> Suppose that a finalizer requires the value of something that is currently
> being evaluated by the main thread.  (This is very common and pretty much
> impossible to reason about in Haskell.  For example, it could be a
> dictionary object or the thunk '(==) dict_Int'.)  The correct thing to do
> if this happens is to block the finalizer, run the main thread until the
> shared thunk is updated with a value, and then restart the finalizer.  To
> block a thread in this way, we have to switch C stacks, perform a context
> switch, etc.  QED.
>
> --
> Alastair Reidhaskell-consulting.com




Re: Running a "final" finaliser

2004-01-06 Thread Adrian Hey
On Monday 05 Jan 2004 3:14 pm, Simon Marlow wrote:
> > The other complication I can see is that ForeignPtr finalisers can't
> > be Haskell. So I have to call the Haskell finalisation from C.
> > Is that safe? I'm afraid I still don't fully understand why Haskell
> > finalisers are unsafe or why (if) calling Haskell from a C finaliser
> > (which then called C land again) would be any safer.
>
> If you don't mind your code being non-portable, then Foreign.Concurrent
> provides Haskell finalisers.

Oh yes, so it does :-) I'd just been looking at the FFI documentation
(only). Thanks for pointing that out.

> This support will be available only on
> Haskell implementations which implement pre-emptive concurrency (i.e.
> just GHC for now).

OK, I think I understand now, thanks to Alastair Reid's explanation. I had
been trying to keep my code portable (it's a library binding I hope
to make available to Haskell folk sometime soon). But this seems
to be quite difficult. AFAICS the situation is that the only really
portable solution to this problem is for the reference counting thing
(or doubly linked lists or whatever) to be done in C (which I guess is
what everybody's been saying all along :-).

Regards
--
Adrian Hey  




Re: Running a "final" finaliser

2004-01-01 Thread Adrian Hey
On Wednesday 31 Dec 2003 10:05 am, Adrian Hey wrote:
> On Wednesday 31 Dec 2003 8:56 am, Adrian Hey wrote:
> > Intended use is something like this...
> >
> > {-# NOINLINE libXYZRef #-}
> > libXYZRef :: LibRef
> > libXYZRef = unsafePerformIO newLibRef
> >
> > main :: IO ()
> > main = finally (initLibXYZ >> userMain) (killLibRef libXYZRef
> >  shutDownLibXYZ)
> > -- initLibXYZ and shutDownLibXYZ are Haskell bindings to functions
> > supplied -- by libXYZ
>
> Actually, using..
>  main = finally (initLibXYZ >> userMain)
> (performGC >> killLibRef libXYZRef shutDownLibXYZ)
>
> seems to fix the problem, which isn't too surprising I guess.
> But then again, if this is a reliable solution there's no need
> for LibRef after all :-)

Hmm, further experiments with creating zillions of garbage
ForeignPtrs (not just 1) reveals that the problem only occurs
if *no* garbage collection has occurred before the program shuts
down. In other words, as long as at least one garbage collection
has occurred, it doesn't matter if library shutdown occurs immediately
in response to killLibRef or if it's deferred until the reference
count hits zero as a result of finalisers being called. (This test
is without the explicit performGC of course.)

So (hoping I will not have to eat my words :-) I'm beginning to suspect
this is a buglet in the ghc rts somewhere.

Regards
--
Adrian Hey


Re: Running a "final" finaliser

2003-12-31 Thread Adrian Hey
On Wednesday 31 Dec 2003 8:56 am, Adrian Hey wrote:
> Intended use is something like this...
>
> {-# NOINLINE libXYZRef #-}
> libXYZRef :: LibRef
> libXYZRef = unsafePerformIO newLibRef
>
> main :: IO ()
> main = finally (initLibXYZ >> userMain) (killLibRef libXYZRef
>  shutDownLibXYZ)
> -- initLibXYZ and shutDownLibXYZ are Haskell bindings to functions supplied
> -- by libXYZ

Actually, using..
 main = finally (initLibXYZ >> userMain)
(performGC >> killLibRef libXYZRef shutDownLibXYZ)

seems to fix the problem, which isn't too surprising I guess.
But then again, if this is a reliable solution there's no need
for LibRef after all :-)

Regards
--
Adrian Hey



Re: Running a "final" finaliser

2003-12-31 Thread Adrian Hey
On Wednesday 31 Dec 2003 8:56 am, Adrian Hey wrote:
> The problem is I get a "fail: <>" error if no garbage collection
> has occured when killLibRef is called (I.E. killLibRef saves shutDownLibXYZ
> for later use because the reference count is non-zero).

Sorry, I should clarify this. The error does not occur when
killLibRef is called; it occurs sometime after that (during
the final rts cleanup and execution of any outstanding
finalisers, I guess).

Regards
--
Adrian Hey


Re: Running a "final" finaliser

2003-12-31 Thread Adrian Hey
Hello again,

I've tried the simplest possible reference counting approach which should
be OK if all finalisers are run eventually (as I think is the case currently
with ghc 6.2).

But I don't seem to be able to get it to work. I've attached the library
reference counting code (LibRef module) to the end of this message.

Intended use is something like this...

{-# NOINLINE libXYZRef #-}
libXYZRef :: LibRef
libXYZRef = unsafePerformIO newLibRef

main :: IO ()
main = finally (initLibXYZ >> userMain) (killLibRef libXYZRef shutDownLibXYZ)
-- initLibXYZ and shutDownLibXYZ are Haskell bindings to functions supplied
-- by libXYZ

userMain :: IO ()
-- userMain creates ForeignPtrs to library objects using addLibRef  

I'm testing by creating 1 ForeignPtr reference using addLibRef and
dropping it immediately thereafter (so it's garbage, but not detected
as such immediately). Running with the -B rts option tells me when
garbage collection has occurred.

The problem is I get a "fail: <>" error if no garbage collection
has occurred when killLibRef is called (i.e. killLibRef saves shutDownLibXYZ
for later use because the reference count is non-zero).

But everything works fine if I wait for garbage collection to occur before
calling killLibRef.

Does anybody have any idea what might be going wrong here?

Personally I'm a bit suspicious of the use of the cToH and hToC functions
in addLibRef, but I'm not aware of any alternative if you want to mix in
some Haskell code with a finaliser.

Thanks for any advice. LibRef code follows below..

module LibRef
(LibRef  -- data LibRef
,newLibRef   -- IO LibRef
,addLibRef   -- LibRef -> FinalizerPtr a -> Ptr a -> IO (ForeignPtr a)
,killLibRef  -- LibRef -> IO () -> IO ()
) where

import Data.IORef
import Foreign.Ptr
import Foreign.ForeignPtr
import Control.Concurrent.MVar

foreign import ccall "dynamic" cToH :: FinalizerPtr a -> (Ptr a -> IO ())
foreign import ccall "wrapper" hToC :: (Ptr a -> IO ()) -> IO (FinalizerPtr a)

newtype LibRef = LibRef (MVar Int       -- Reference count (and lock)
                        ,IORef (IO ())  -- Shutdown action
                        )

-- Create a new LibRef
newLibRef :: IO LibRef
newLibRef = do
  countRef  <- newMVar 0             -- No references
  killitRef <- newIORef $ return ()  -- No shutdown action initially
  return $ LibRef (countRef,killitRef)

-- Similar to newForeignPtr. Creates a ForeignPtr reference to a library
-- object and increments the LibRef reference count. The actual finaliser
-- used runs the supplied finaliser (second arg) and then decrements the
-- LibRef reference count.
addLibRef :: LibRef -> FinalizerPtr a -> Ptr a -> IO (ForeignPtr a)
addLibRef libRef@(LibRef (countMVar,_)) finalise ptr = do
  finalise' <- hToC $ \p -> do cToH finalise p
                               decLibRef libRef
  count <- takeMVar countMVar     -- Read (and lock)
  putMVar countMVar $! (count+1)  -- Increment (and unlock)
  newForeignPtr finalise' ptr

-- Decrement a LibRef reference count. If the resulting reference
-- count is zero whatever action is stored in killitRef is executed
-- (and killitRef is reset to return ())
decLibRef :: LibRef -> IO ()
decLibRef (LibRef (countMVar,killitRef)) = do
  putStrLn ""
  count <- takeMVar countMVar  -- Read and lock
  case count of
    0 -> error "decLibRef applied to zero reference count"
    1 -> do killit <- readIORef killitRef     -- Get configured kill
            writeIORef killitRef $ return ()  -- Reset killitRef
            putMVar countMVar 0               -- Reset and unlock
            killit                            -- Kill it
            putStrLn ""
    _ -> putMVar countMVar $! (count-1)       -- Decrement and unlock

-- Call this when the library is no longer needed.
-- Second Arg is library shutdown action. This is performed immediately
-- if reference count == 0. Otherwise it is stored and executed by the
-- last finaliser (when reference count hits 0).
killLibRef :: LibRef -> IO () -> IO ()
killLibRef (LibRef (countMVar,killitRef)) killit = do
  count <- takeMVar countMVar  -- Read and lock
  if count == 0 then do writeIORef killitRef $ return ()  -- Reset killitRef
                        putMVar countMVar count           -- Unlock
                        killit                            -- Execute now
                        putStrLn ""
                else do writeIORef killitRef killit       -- Save for later
                        putMVar countMVar count           -- Unlock
                        putStrLn ""

Regards
--
Adrian Hey



Re: Running a "final" finaliser

2003-12-28 Thread Adrian Hey
On Tuesday 23 Dec 2003 7:22 am, Adrian Hey wrote:
> Assuming the weak pointers solution is the way to go, I've been
> re-aquainting myself with System.Mem.Weak and now I'm now wondering
> what is an appropriate key for each ForeignPtr.
>
> Would it be OK to use the ForeignPtr itself as it's own key?
> (Seems OK to me, but this is a bit different from the memoisation
> example so I thought I'd check.)

I guess I should've taken a look at
 mkWeakPtr :: k -> Maybe (IO ()) -> IO (Weak k)
before asking this :-)

Regards
--
Adrian Hey




Re: Running a "final" finaliser

2003-12-23 Thread Adrian Hey
Hello

On Tuesday 23 Dec 2003 9:27 am, Simon Marlow wrote:
> > Assuming the weak pointers solution is the way to go, I've been
> > re-aquainting myself with System.Mem.Weak and now I'm now wondering
> > what is an appropriate key for each ForeignPtr.
>
> Before we go down that route, I want to be sure that it's actually
> necessary to use weak pointers.  It sounds like your application has the
> following properties:
>
>   - there is a library that can allocate some resources, where
> each resource is represented by a ForeignPtr

Basically, but there are also some hardware resources (other than memory)
which are claimed just as a result of library initialisation (before any
library objects have been created).

>   - a resource needs to be released when it is no longer referenced

Yes, that's right.

>   - at some point, we would like to free *all* outstanding resources
> (either at the end of the program, or when the library is no
> longer required).

I want to free all heap space used by library objects, then free whatever
other hardware resources have been claimed by the library (by calling
the appropriate shutdown routine).

> If this is the case, I'd do it something like this:
>
>   - keep a global list of the pointers still to be released, probably
> a doubly-linked list.  Lock the whole thing with an MVar.  Elements
> are Ptrs, not ForeignPtrs.
>
>   - the finaliser on each ForeignPtr removes the corresponding Ptr from
> the list.
>
>   - the final cleanup routine explicitly releases all the remaining
> Ptrs in the list, holding the MVar lock as it does so to avoid
> race conditions with finalisers.
>
> Weak pointers aren't required, AFAICT.

Maybe, I'd forgotten that I could get at the Ptr inside each ForeignPtr.
I guess I've still got to think about the consequences of ForeignPtr
finalisers being run after the "final" shutdown. (Making each
list cell an IORef (Maybe something) would handle that, I think.)
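
For what it's worth, the MVar-locked registry described above might be sketched like this (simplified to a plain list rather than a doubly linked one, so removal is O(n); all names here are invented for illustration):

```haskell
import Control.Concurrent.MVar
import Foreign.Ptr

-- Global list of pointers still to be released, locked by an MVar.
type Registry = MVar [Ptr ()]

newRegistry :: IO Registry
newRegistry = newMVar []

register :: Registry -> Ptr () -> IO ()
register reg p = modifyMVar_ reg (return . (p :))

-- Each ForeignPtr finaliser removes its Ptr from the list.
unregister :: Registry -> Ptr () -> IO ()
unregister reg p = modifyMVar_ reg (return . filter (/= p))

-- Final cleanup: release everything still registered. Holding the MVar
-- for the duration avoids races with concurrently running finalisers.
cleanupAll :: Registry -> (Ptr () -> IO ()) -> IO ()
cleanupAll reg release = modifyMVar_ reg $ \ps -> mapM_ release ps >> return []
```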

The other complication I can see is that ForeignPtr finalisers can't
be Haskell. So I have to call the Haskell finalisation from C.
Is that safe? I'm afraid I still don't fully understand why Haskell
finalisers are unsafe or why (if) calling Haskell from a C finaliser
(which then called C land again) would be any safer. 

Thanks for the idea though. I'll play about with a few implementations
of these ideas after christmas and see what problems I encounter.

Regards
--
Adrian Hey



Re: Running a "final" finaliser

2003-12-23 Thread Adrian Hey
On Monday 22 Dec 2003 8:53 pm, Carl Witty wrote:
> >  > Thanks for your reply. I'm afraid it's left me even
> > >
> > > more confused about which way to go with this :-(
>
> Is your problem something you could handle with a C atexit() handler?

That's a good idea. With ghc I guess this will work, assuming..
1- the ghc rts runs all ForeignPtr finalisers before it shuts down.
2- the ghc rts is shut down before atexit handlers are executed.

I think both 1 & 2 are true with ghc at present, but Simon M.
indicated that 1 might not be true in future for ghc (or other
Haskell implementations). That said, the current FFI spec
states at the bottom of p.14..

"There is no guarantee on how soon the finalizer is executed
after the last reference to the associated foreign pointer
was dropped; this depends on the details of the Haskell storage
manager. The only guarantee is that the finalizer runs before
the program terminates."

So I'm still confused :-)

Actually, though I think it would work for me, it's probably
not as general as some folk might want (they might want to
shut down the library and free up whatever resources it claimed
earlier in program execution, not just at exit).

Regards
--
Adrian Hey




Re: Running a "final" finaliser

2003-12-23 Thread Adrian Hey
On Monday 22 Dec 2003 10:13 am, Simon Marlow wrote:
> > Thanks for your reply. I'm afraid it's left me even
> > more confused about which way to go with this :-(
> >
> > If it's possible that future Haskell FFI's don't guarantee
> > that all finalisers are run then this more or less rules
> > out the use of the reference counting solution (which
> > wasn't particularly attractive anyway because it needs to
> > be done in C AFAICS :-). If users who want this behaviour
> > are required to code it themselves, it seems to require that
> > they maintain a global list of all allocated ForeignPtrs.
> > But doing that naively will stop them being garbage collected
> > at all, unless it's possible to do something clever using weak
> > pointers. Perhaps it is possible (or maybe some tricks at the
> > C level could be used) but I think it's a significant extra
> > burden for FFI users.
>
> Yes, it would have to be a global list of weak pointers to ForeignPtrs.
> This topic has come up before, though not on this list.  See this
> message, and the rest of the thread:
>
> http://www.haskell.org/pipermail/cvs-ghc/2003-January/016651.html
>
> the thread also moved on to [EMAIL PROTECTED]:
>
> http://www.haskell.org/pipermail/ffi/2003-January/001041.html
>
> and be sure to check out the paper by Hans Boehm referenced in that
> message, it's a good summary of the issues involved.

Thanks, I'll take a look at the Boehm paper. I didn't keep up with
this discussion at the time, but now I see the relevance. 

Assuming the weak pointers solution is the way to go, I've been
re-acquainting myself with System.Mem.Weak and now I'm wondering
what is an appropriate key for each ForeignPtr.

Would it be OK to use the ForeignPtr itself as its own key?
(Seems OK to me, but this is a bit different from the memoisation
example so I thought I'd check.)

If so, then I guess the thing to do is to maintain a mutable doubly
linked list of Weak pointers to ForeignPtrs using IORef's and have
the finaliser for each weak pointer "short out" the corresponding
list cell. When the program terminates execute the finalisers
of all ForeignPtrs which remain in this list.

Hmm, this is getting awfully complicated, and I still have my
doubts about it for a couple of reasons..

1- Executing ForeignPtr finalisers directly (as in Krasimir's
   example) seems to be ghc specific.
2- If there is no guarantee whether or when ForeignPtr finalisers
   are run then it seems that it is possible that a Weak pointer
   finaliser has been run (thereby deleting the weak pointer
   reference from the list), but the corresponding ForeignPtr
   finaliser has *not* been run.

The solution to problem 2 would seem to be to not associate
any finaliser with the ForeignPtr, but to do all finalisation
in the Weak pointer finaliser. I guess that would cure problem
1 too.

What do folk think about this?   
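
Concretely, the scheme might look something like this sketch (simplified to a singly linked IORef list instead of the doubly linked "short out" version; all finalisation is attached to the weak pointer, and the names are invented):

```haskell
import Data.IORef
import Foreign.ForeignPtr
import System.Mem.Weak

-- Mutable list of weak pointers to the ForeignPtrs still of interest.
type WeakList a = IORef [Weak (ForeignPtr a)]

-- Track a ForeignPtr, using it as its own key; 'release' is the real
-- finalisation action, attached to the *weak* pointer (no finaliser is
-- given to the ForeignPtr itself).
track :: WeakList a -> ForeignPtr a -> IO () -> IO ()
track listRef fp release = do
  w <- mkWeakPtr fp (Just release)
  modifyIORef listRef (w :)

-- At shutdown, run the finalisers of everything still tracked.
-- (finalize is a no-op for weak pointers the GC has already finalised.)
shutdownAll :: WeakList a -> IO ()
shutdownAll listRef = readIORef listRef >>= mapM_ finalize
```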

> performGC doesn't do anything that you can rely on :-)

Oh, that's handy :-)

> > Also, I could you explain what you mean by a suitable
> > exception handler? I don't really understand this at all.
> > I'd expected I may well end up using bracket or similar,
> > but I'm not sure how exception handling is relevant to
> > this problem.
>
> Start your program something like this:
>
>   import Control.Exception (finally)
>
>   main = my_main `finally` clean_up
>   my_main = ... put your program here ...
>   clean_up = ... all the cleanup code goes here ...
>
> You can additionally use finalizers to perform incremental cleanup
> during program execution, but the right way to clean up at the end is to
> use an exception handler as above.

Ah OK, I was hoping the whole thing would be something as simple
as this..

withLibXYZ :: IO () -> IO ()
withLibXYZ doit = finally (initialiseLibXYZ >> doit)
                          (performGC >> shutdownLibXYZ)

Where initialiseLibXYZ and shutdownLibXYZ are simple foreign functions
imported from libXYZ. I think it's a real shame performGC or some other
similar function can't simply guarantee that all (garbage) ForeignPtr
finalisers have been run before calling shutdownLibXYZ :-(

Regards
--
Adrian Hey






Re: Running a "final" finaliser

2003-12-19 Thread Adrian Hey
Thanks for your reply. I'm afraid it's left me even
more confused about which way to go with this :-(

If it's possible that future Haskell FFI's don't guarantee
that all finalisers are run then this more or less rules
out the use of the reference counting solution (which
wasn't particularly attractive anyway because it needs to
be done in C AFAICS :-). If users who want this behaviour
are required to code it themselves, it seems to require that
they maintain a global list of all allocated ForeignPtrs.
But doing that naively will stop them being garbage collected
at all, unless it's possible to do something clever using weak
pointers. Perhaps it is possible (or maybe some tricks at the
C level could be used) but I think it's a significant extra
burden for FFI users.

Also, while we're talking about this, maybe the semantics
of performGC should be clarified. Does it block until
all GC (and finalisation of garbage ForeignPtrs) is complete?
I would guess this was the original intention, but this
doesn't seem to be consistent with non-stop Haskell. If
it does block, are all Haskell threads blocked, or just
the calling thread?

Also, could you explain what you mean by a suitable
exception handler? I don't really understand this at all.
I'd expected I may well end up using bracket or similar,
but I'm not sure how exception handling is relevant to
this problem.
 
Thanks
--
Adrian Hey

On Thursday 18 Dec 2003 1:59 pm, Simon Marlow wrote:
> > I hope this question isn't too stupid, but I can't find
> > any obvious way to do this from reading the ghc docs.
> >
> > What I want to do is call a final foreign function (a
> > library shutdown routine) when Haskell terminates, but
> > after all ForeignPtr finalisers have been run.
> >
> > I suppose I could implement some kind of finaliser
> > counter so the last finalizer could tell it was the
> > last finaliser and call the shutdown routine, but this
> > seems a little awkward, especially as these days the
> > FFI requires finalisers to be foreign functions.
> >
> > The other possibility that occurs to me is that I call
> > performGC at the very end of the program, immediately
> > before calling the library shutdown routine. But I'm
> > not too sure whether that will guarantee that all
> > finalizers have been run even if there are no live
> > references to foreign objects at that point. (Using
> > GC as described in the "non-stop Haskell" paper it
> > seems possible that finalisers won't be run immediately
> > in response to performGC.)
>
> Using an explicit reference count sounds fine to me.  The runtime system
> doesn't support any ordering constraints between finalizers (it's a
> really hard problem in general), so the party line is "you have to code
> it up yourself".
>
> Actually, I seem to recall that we were going to disable the running of
> finalizers at the end of the program completely, in which case you would
> have to add your cleanup code in the main thread, with an appropriate
> exception handler.
>
> Cheers,
>   Simon



Running a "final" finaliser

2003-12-18 Thread Adrian Hey
Hello,

I hope this question isn't too stupid, but I can't find
any obvious way to do this from reading the ghc docs.

What I want to do is call a final foreign function (a
library shutdown routine) when Haskell terminates, but
after all ForeignPtr finalisers have been run.

I suppose I could implement some kind of finaliser
counter so the last finalizer could tell it was the
last finaliser and call the shutdown routine, but this
seems a little awkward, especially as these days the
FFI requires finalisers to be foreign functions.

The other possibility that occurs to me is that I call
performGC at the very end of the program, immediately
before calling the library shutdown routine. But I'm
not too sure whether that will guarantee that all
finalizers have been run even if there are no live
references to foreign objects at that point. (Using
GC as described in the "non-stop Haskell" paper it
seems possible that finalisers won't be run immediately
in response to performGC.)

Is there a better way?

Thanks
--
Adrian Hey




Re: DiffArray Performance

2003-10-28 Thread Adrian Hey
Hello again,

Another thought..

Could it be the fact that sTree0 is cyclic that accounts for this
dramatic slowdown? I'm not too sure how DiffArrays are
implemented, but if it's how I would do it, it seems you
would end up with a massive chain of indirections.

Actually, it's probably not a good idea to have a
DiffArray as a top level CAF, cyclic or otherwise?

Hmm.. I think that must be it.

On Monday 27 Oct 2003 6:21 pm, Adrian Hey wrote:
> > -- Search Tree data type
> > newtype STree = STree (Array Int (STree,[Match]))
> > -- Initial value for Search Tree
> > sTree0 :: STree
> > sTree0 = STree (array (0,9) [(n,(sTree0,[]))| n <- [0..9]])



> The code is otherwise identical, so any difference in execution time
> must be caused by the difference between reading/writing the respective
> arrays. I wouldn't expect them to be identical but for DiffArrays
> to be over 100 times slower seems a bit strange (especially for
> a relatively small array of 10 elements). That's an O(n^2) difference
> somewhere (neglecting any constant factors).



Re: DiffArray Performance

2003-10-27 Thread Adrian Hey
On Monday 27 Oct 2003 3:17 pm, Alastair Reid wrote:
> You don't mention _how_ you use diffarrays so I'll guess you're keeping the
> array in ascending order, finding the insertion point by binary search and
> inserting by copying bigger elements one place to the right.

> Using binary trees (the code you copied), the cost of N insertions will be
> O(N * log N) assuming the input is in random order.

Thanks for your reply. Maybe I'm being dense, but I'm afraid I don't
understand the relevance of your answer to my problem :-(

I guess I should have been a bit clearer in my explanation of what the
code is doing. Here's the code again..

> -- Search Tree data type
> newtype STree = STree (Array Int (STree,[Match]))
> -- Initial value for Search Tree
> sTree0 :: STree
> sTree0 = STree (array (0,9) [(n,(sTree0,[]))| n <- [0..9]])
>
> -- Make the search tree from a list of words
> makeSTree :: [String] -> STree
> makeSTree ws = foldl' putWord sTree0 pairs where
>   pairs = [let ps = packString w in ps `seq` (word2keys w, MatchW ps) | w<-ws]
>   word2keys cs = [getKey (toUpper c) | c <- cs, c /= '"' , c /= '-' ]
>   putWord stree (keys,m) = put keys stree
>     where put [] _ = error "makeSTree: empty Keys"
>           put [k]    (STree a) = let (t,ms) = a ! k
>                                  in STree (a // [(k,(t,m:ms))])
>           put (k:ks) (STree a) = let (t,ms) = a ! k
>                                      t' = put ks t
>                                  in t' `seq` STree (a // [(k,(t',ms))])

This generates a 10-way (denary?) search tree, each vertex of which is an
ordinary Haskell98 Array of 10 elements. Each edge is annotated with any
strings which encode a particular sequence of decimal digits (keys), which
gives the path to the vertex in question.

To find any words which encode a particular sequence of digits, the
corresponding search function (not included above) just looks down the
relevant branches (one for each digit) until no more digits are left.
Whatever annotations are on the last edge are the words that encode
the digit sequence (if any). The search is very fast; it's the
construction of the search tree in the first place that seems a bit
slow (extremely slow using DiffArrays).
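The search described above (which the post does not include) might look like the sketch below. This is self-contained with hypothetical names: String stands in for the post's Match type, and digits are assumed to be pre-converted to Int keys:

```haskell
import Data.Array

-- A 10-way search tree; each edge carries the words ending there.
newtype STree = STree (Array Int (STree, [String]))

-- Cyclic empty tree, as in the post's sTree0.
emptyT :: STree
emptyT = STree (array (0,9) [(n, (emptyT, [])) | n <- [0..9]])

insertT :: [Int] -> String -> STree -> STree
insertT []     _ t         = t
insertT [k]    w (STree a) = let (t, ms) = a ! k
                             in STree (a // [(k, (t, w:ms))])
insertT (k:ks) w (STree a) = let (t, ms) = a ! k
                             in STree (a // [(k, (insertT ks w t, ms))])

-- Follow one branch per digit; the annotations on the last edge are
-- the words encoding the digit sequence.
searchT :: [Int] -> STree -> [String]
searchT []     _         = []
searchT [k]    (STree a) = snd (a ! k)
searchT (k:ks) (STree a) = searchT ks (fst (a ! k))

main :: IO ()
main = print (searchT [4,3] (insertT [4,3] "hi" (insertT [4,3] "if" emptyT)))
-- prints ["hi","if"]
```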

The test used DiffArray instead of a Haskell98 array..

> -- Search Tree data type
> newtype STree = STree (DiffArray Int (STree,[Match]))

The code is otherwise identical, so any difference in execution time
must be caused by the difference between reading/writing the respective
arrays. I wouldn't expect them to be identical but for DiffArrays
to be over 100 times slower seems a bit strange (especially for
a relatively small array of 10 elements). That's an O(n^2) difference
somewhere (neglecting any constant factors).

I'm just wondering if there's a bug in their implementation, or whether
I'm using them incorrectly, and am generally seeking advice
about any faster arrays I could try.

Regards
--
Adrian Hey


DiffArray Performance

2003-10-27 Thread Adrian Hey
Hello,

I've been trying to optimise the following code..

-- Search Tree data type
newtype STree = STree (Array Int (STree,[Match]))
-- Initial value for Search Tree
sTree0 :: STree
sTree0 = STree (array (0,9) [(n,(sTree0,[]))| n <- [0..9]])

-- Make the search tree from a list of words
makeSTree :: [String] -> STree
makeSTree ws = foldl' putWord sTree0 pairs where
  pairs = [let ps = packString w in ps `seq` (word2keys w, MatchW ps) | w<-ws]
  word2keys cs = [getKey (toUpper c) | c <- cs, c /= '"' , c /= '-' ]
  putWord stree (keys,m) = put keys stree
    where put [] _ = error "makeSTree: empty Keys"
          put [k]    (STree a) = let (t,ms) = a ! k
                                 in STree (a // [(k,(t,m:ms))])
          put (k:ks) (STree a) = let (t,ms) = a ! k
                                     t' = put ks t
                                 in t' `seq` STree (a // [(k,(t',ms))])

This seems to be taking about 4.8 seconds (of 5.1 seconds total
program execution time) for the input I'm using. I thought using
DiffArrays might be faster, but no such luck. Execution time rose
to 9.5 *minutes*.

Is this what I should expect to see?

I'm using ghc 6.0, BTW.

Thanks
--
Adrian Hey


Re: Effect of large binaries on garbage collection

2003-03-15 Thread Adrian Hey
On Wednesday 12 March 2003 00:49, Manuel M T Chakravarty wrote:
> "Simon Peyton-Jones" <[EMAIL PROTECTED]> wrote,
>
> > | In the current CVS GHC, undoubtedly the right thing to use is
> > | Foreign.mallocForeignPtr.  Internally these are implemented as
> > | MutableByteArray#, so you get fast allocation and GC, but from the
> > | programmer's point of view it's a normal ForeignPtr.
> >
> > I wonder how it is for a random FFI user to discover this.  Does some
> > advice belong in the FFI spec, or in GHC's FFI chapter?
>
> What is needed is a FFI tutorial (in addition to the spec);
> similar to how we have the Gentle Introduction to complement
> the Report.

Thanks for the advice from everybody. Actually, on the whole I think
the FFI spec is pretty easy to understand, but a tutorial would be nice
too :-) There are obviously subtleties about it which I had not
understood properly.
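For reference, the mallocForeignPtr route quoted above looks like this in use (a minimal sketch; mallocForeignPtrBytes gives memory that is freed automatically when the ForeignPtr is garbage collected):

```haskell
import Foreign.ForeignPtr (mallocForeignPtrBytes, withForeignPtr)
import Foreign.Ptr (Ptr, castPtr)
import Foreign.Storable (peek, poke)

main :: IO ()
main = do
  -- Allocate 8 bytes managed by the GC; no explicit free needed.
  fp <- mallocForeignPtrBytes 8
  withForeignPtr fp $ \p -> do
    poke (castPtr p :: Ptr Int) 42
    v <- peek (castPtr p) :: IO Int
    print v  -- prints 42
```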

Regards
--
Adrian Hey 
  


Re: Effect of large binaries on garbage collection

2003-03-10 Thread Adrian Hey
On Thursday 06 March 2003 10:55, Adrian Hey wrote:
> On Tuesday 04 March 2003 12:36, Simon Peyton-Jones wrote:
> > GHC does not copy big objects, so don't worry about the copying cost.
> > (Instead of copying, it allocates big objects to (a contiguous series
> > of) heap blocks, with no other objects in those blocks.  Then the object
> > can "move" simply by swizzling the heap-block descriptor.)
>
> Thanks, looks like it's option (1) then. Could you tell me what
> Haskell type I have to use to be able to pass a pointer to this binary
> to C land (genuine address, not StablePtr). I don't think
> the standard FFI allows this at all but, IIRC, the old ghc libraries
> allowed you to do this with a mutable byte array.

Does anybody know the answer to this? please.. pretty please..:-)

Sorry if this is a case of me failing to RTFM, but I don't
think it is. Paragraph 8.1.1 of the users guide says..

 "The types ByteArray and MutableByteArray may be used as basic
  foreign types (see FFI Addendum, Section 3.2). In C land,
  they map to (char *)."

I can't find any way in the Base libs (or what's left of the old
libs) to create a ByteArray or MutableByteArray, which leads me to
suspect that they no longer exist.

Should I use something else instead?

Thanks
--
Adrian Hey


Re: Effect of large binaries on garbage collection

2003-03-06 Thread Adrian Hey
On Tuesday 04 March 2003 12:36, Simon Peyton-Jones wrote:
> GHC does not copy big objects, so don't worry about the copying cost.
> (Instead of copying, it allocates big objects to (a contiguous series
> of) heap blocks, with no other objects in those blocks.  Then the object
> can "move" simply by swizzling the heap-block descriptor.)

Thanks, looks like it's option (1) then. Could you tell me what
Haskell type I have to use to be able to pass a pointer to this binary
to C land (genuine address, not StablePtr). I don't think
the standard FFI allows this at all but, IIRC, the old ghc libraries
allowed you to do this with a mutable byte array.

Regards
--
Adrian Hey



Effect of large binaries on garbage collection

2003-03-04 Thread Adrian Hey
Hello,

I'm writing a library which will require many blocks of binary data
of various sizes (some very large) to be stored in the heap. I'm a
little worried about the effect this will have on the efficiency of
garbage collection. I'm not sure how ghc gc works these days, but
I seem to remember it's a copying collector by default. If so it seems
a pity to waste time copying 10's of MBytes of binaries at each
collection.

The options I'm considering are..

(1) Use Haskell heap space
Pros: Easy for me
Cons: May slow down gc
  AFAICS I can't use anything like realloc
  Current FFI proposals seem to prevent me from directly
  accessing Haskell heap objects from C land (or have I
  misunderstood?).

(2) Use C heap space
Pros: Easy(ish) to use from C and Haskell ffi
Cons: Unless C heaps have improved a lot since I last looked
  (which I doubt), it seems likely I will suffer from slow
  allocation and fragmentation problems. 

(3) Write my own "sliding" heap manager and use finalisers for
   garbage collection.
   Pros: Can tailor it to work exactly the way I want.
   Cons: More work for me, especially if I want the
 result to be portable across OS's. 
 Might be a complete waste of time if my worries
 about ghc heap management are groundless :-)

Any advice?

Thanks
--
Adrian Hey


Re: O'Haskell

2003-02-11 Thread Adrian Hey
On Friday 07 February 2003 10:32, Steffen Mazanek wrote:
> Hello.
>
> I am toying with the idea of implementing the OHaskell-concepts
> of Johan Nordlander in the GHC (as a diploma thesis).
> Simon Marlow adviced me to do some "market research" in this
> group and here we go.
> I am interested in all kinds of comments, advices, scrupulosites...

I would be interested in trying this out if you're offering to
produce it:-) But I fear that forking ghc to produce a non-standard
compiler which may well suffer "bit rot" in the end won't attract
many users. Nobody is going to invest a lot of time writing code
in a language for which there is only 1 compiler (which will probably
not be maintained in the long term). Perhaps instead you could do much
of what you want with some kind of pre-processor which may be easier
to maintain.

O'Haskell doesn't seem to have attracted much attention from Haskellers,
and I suspect that one reason for this is that the only implementation
is a modified, obsolete version of Hugs (1.3?). It would be a pity if
your version suffered the same fate.

Regards
--
Adrian Hey




Re: Linking with object files

2003-01-08 Thread Adrian Hey
On Wednesday 08 January 2003  5:00 pm, Simon Marlow wrote:
> What command line are you using?  Here's what I did:
>
> ~/scratch > cat  >foo.c
> ~/scratch > gcc -c foo.c
> ~/scratch > ghc --make hello.hs foo.o
> ghc-5.04.2: chasing modules from: hello.hs
> Skipping  Main ( hello.hs, ./hello.o )
> ghc: linking ...
> ~/scratch >

The exact command line I'm using is..
 ghc --make -fglasgow-exts -Wall -o Main.exe Main.hs Fill.o Render.o
which gives..
 ghc-5.04.2: chasing modules from: Main.hs,Fill.o,Render.o
 ghc-5.04.2: can't find module `Fill.o' (while processing "Fill.o")

But playing about a bit, I found the solution. It doesn't like
upper case object file names. Not sure if that's by design or an
oversight. I've changed them to lower case and it works fine now.

Regards
--
Adrian Hey 




Re: Linking with object files

2003-01-08 Thread Adrian Hey
Thanks for your answers

On Wednesday 08 January 2003 10:04 am, Simon Marlow wrote:
> > I get the error.. can't find module 'foo.o' 
>
> I think you must be using a version of GHC prior to 5.04.2.  This
> functionality was fixed in 5.04.2.

I just checked, it really is (or claims to be:-) version 5.04.2.
I get the same error using 5.04.2 under Linux too.

> > The second question is what object formats does ghc
> > understand and is the
> > object file suffix significant? If I try elf format, this is
> > accepted without
> > complaint but I get a broken executable (though this could
> > well be because
> > my assembler has generated a broken elf file). Using coff
> > format seems OK.
> > The files have a ".o"  suffix in both cases.
> >
> > FWIW, I'm using ghc 5.04.2 on Win32 with the nasm assembler.
>
> I'm not sure about this one - any Windows experts out there like to
> comment?
>
> My guess is that GHCi should load COFF objects only, but other kinds of
> objects might work when using the system linker (i.e. in batch mode).

I've now tried a variety of formats on both Win32 and Linux; the answer
seems to be:

On Linux using ld version 2.10.91
-
aout format works.

coff,as86,obj,win32,rdf formats all give the error..
foo.o: file not recognized: File format not recognized

elf format gives the error..
/usr/bin/ld: foo.o: invalid section symbol index 0xfff1 (*ABS*) ingored
It does indeed "ingore" the error and go on to produce an executable, but
this time it works.


On Win32 using ld version 2.11.90 (as shipped with ghc)
---
coff and win32 formats work.

as86,obj,rdf formats all give the error..
foo.o: file not recognized: File format not recognized

elf format gives no error, but the executable is broken

This was all with a ".o" suffix; dunno if changing it to something else
might get rid of some of these errors.

Don't know if any of this is any help. It seems there's something
dodgy with elf format, either in nasm or ld (not sure which).

Regards
--
Adrian Hey



Linking with object files

2003-01-06 Thread Adrian Hey
Hello,

I seem to be having some trouble doing this and have a couple of questions..

The first question is how do you use --make option when doing this?
Section 4.5 of the users guide seems contradictory to me.

It states..
 "The command line must contain one source file or module name"

and later..
 "If the program needs to be linked with additional objects (say, some
  auxilliary C code), these can be specified on the command line as usual."

> ghc Main.hs foo.o
is OK, but whenever I try this...

> ghc --make Main.hs foo.o
I get the error.. can't find module 'foo.o' 

The second question is what object formats does ghc understand and is the
object file suffix significant? If I try elf format, this is accepted without
complaint but I get a broken executable (though this could well be because
my assembler has generated a broken elf file). Using coff format seems OK.
The files have a ".o"  suffix in both cases.

FWIW, I'm using ghc 5.04.2 on Win32 with the nasm assembler.

Thanks
--
Adrian Hey  




Re: Re : Extensible records in Haskell

2002-11-07 Thread Adrian Hey
On Wednesday 06 November 2002 10:48 pm, Nicolas Oury wrote:
> I am going to try to persuade you:
>
> * first of all, it seems to be needed in order to make "first class
> modules"  (cf your paper) . And I think that a true module system would
> be useful. But I may be wrong.
>
> * As far as I am concerned, in fact, I need it to do something on the
> typing of problems like database queries, where the naming is quite
> concerning. I think for example, HaskellDB (don't know if it was this
> actually the name) was doing something like this.
>
> * It would be used: it is easy to understand, safe, and avoids renaming
> with different names some fields that should have the same name.
>
> * ...
>
> I could try find other reasons tomorrow.

I'll second this request.

I would also like a better records and/or first class modules system
with extensibility and sub-typing or row polymorphism (not sure which
is best or most feasible).

I would also like to be able to use field names properly with
existentials. (Hmm, I suspect having existentials and extensibility
is difficult?)

Also, is there some good technical reason why we can't allow punning?

My wish list anyway.

Thanks
--
Adrian Hey




Re: Type of newForeignPtr & addForeignPtrFinalizer

2002-07-23 Thread Adrian Hey

On Monday 22 July 2002 12:33 pm, Simon Marlow wrote:
> > The second seems to require this bit of weirdness..
> >  myNewForeignPtr :: Ptr a -> (ForeignPtr a -> IO ()) -> IO
> > (ForeignPtr a)
> >  myNewForeignPtr p fin = do
> >newfp  <- newForeignPtr p (return ())
> >addForeignPtrFinalizer newfp (fin newfp)
> >return newfp
>
> You can do this more easily using fixIO:
>
>myNewForeignPtr p fin = do
>   fixIO (\fp -> newForeignPtr p (fin fp))

Thanks, neat (I think:-). I wonder if I might indulge myself with
another stupid question related to this, that is, why make the
distinction between Ptr and ForeignPtr at all?

By definition a ForeignPtr has a non-zero number of finalisers
and Ptr has no finalisers. Couldn't you just allow ForeignPtr's
to have no finalisers and dispense with Ptr alltogether? Then
you could just add finalisers as required rather than converting
between types. It seems that when making a foreign binding you have
to make what seems (to me) an arbitrary choice between Ptr and
ForeignPtr arguments. I don't really understand the reason for
this extra complexity.
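The fixIO trick quoted above can be demonstrated in miniature without any FFI; fixIO passes an action its own eventual result, lazily:

```haskell
import System.IO (fixIO)

main :: IO ()
main = do
  -- Tie the knot: xs is defined in terms of itself, giving a
  -- cyclic list of ones (the same idea lets a finaliser refer to
  -- the ForeignPtr being constructed).
  xs <- fixIO (\xs -> return (1 : xs))
  print (take 5 (xs :: [Int]))  -- prints [1,1,1,1,1]
```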

Regards
--
Adrian Hey



Type of newForeignPtr & addForeignPtrFinalizer

2002-07-07 Thread Adrian Hey

Hello,

I have a feeling this may be a stupid question, but why are the
types of these..
 newForeignPtr  :: Ptr a -> IO () -> IO (ForeignPtr a)
 addForeignPtrFinalizer :: ForeignPtr a -> IO () -> IO ()
(second arg being the finalizer)

Won't a finaliser almost always take a pointer to the thing being
finalised as an argument? If so it would be more convenient
to have newForeignPtr..
 newForeignPtr :: Ptr a -> (Ptr a -> IO ()) -> IO (ForeignPtr a)
or maybe..
 newForeignPtr :: Ptr a -> (ForeignPtr a -> IO ()) -> IO (ForeignPtr a)

and..
 addForeignPtrFinalizer :: ForeignPtr a -> (ForeignPtr a -> IO ()) -> IO ()


The first of these is easy to implement yourself I suppose..
 myNewForeignPtr :: Ptr a -> (Ptr a -> IO ()) -> IO (ForeignPtr a)
 myNewForeignPtr p fin = newForeignPtr p (fin p)

The second seems to require this bit of weirdness..
 myNewForeignPtr :: Ptr a -> (ForeignPtr a -> IO ()) -> IO (ForeignPtr a)
 myNewForeignPtr p fin = do
   newfp  <- newForeignPtr p (return ())
   addForeignPtrFinalizer newfp (fin newfp)
   return newfp

Unless I'm missing something, you have to use a pointless dummy
finaliser (return ()) to get a ForeignPtr to use as the argument
of the real finaliser.
   
The reason I ask is I've been trying to use C2HS recently to
produce a Haskell binding to GNU plot library, and have had
to do something very similar. The relevant bits of .chs file
being..

-- Haskell: newtype Plotter = Plotter (ForeignPtr Plotter)
{#pointer *plPlotter   as Plotter   foreign newtype#}

-- Haskell: newtype PlotterParams = PlotterParams (ForeignPtr PlotterParams)
{#pointer *plPlotterParams as PlotterParams foreign newtype#}

-- Destructor for the plPlotter type.
-- C proto: int pl_deletepl_r (plPlotter *plotter);
-- Haskell: deletePlotter :: Plotter -> IO ()
{#fun unsafe pl_deletepl_r as deletePlotter
{id `Plotter'} -> `()'  -- id as marshaller??
#}

-- Create a new X Plotter
-- C proto: plPlotter* new_x_plotter (plPlotterParams* plotter_params);
-- Haskell: newXPlotter   :: PlotterParams -> IO Plotter
--  newXPLotter'_ :: PlotterParams -> IO (Ptr Plotter)
{#fun unsafe new_x_plotter as newXPlotter
{id `PlotterParams'} -> `Plotter' marshalPlotter *
#}
marshalPlotter :: Ptr Plotter -> IO Plotter
marshalPlotter p = do
 foreignPlotter <- newForeignPtr p (return ())
 let plotter = Plotter foreignPlotter
 addForeignPtrFinalizer foreignPlotter (deletePlotter plotter)
 return plotter

I suppose the thing that's worrying me most is will this work?
(I don't have enough done yet to find out). It seems overly
complicated to me, but I can't see any alternative to this
complexity. Am I using C2HS right? Have I missed something
obvious?

Thanks
--
Adrian Hey




Can anyone get at c2hs in cvs?

2002-07-02 Thread Adrian Hey

Hello,

Sorry if this is slightly off topic, but I don't seem to be able to
get at c2hs and I wonder if anyone here has had the same problem or
knows of a fix.

The instructions here..
 http://www.cse.unsw.edu.au/~chak/haskell/c2hs/

say to login using..
 cvs -d :pserver:[EMAIL PROTECTED]:/home/chakcvs/cvs login

then enter password (anonymous).

When I do this all I get back is..
cvs [login aborted]: connect to ceres.cse.unsw.edu.au:2401 failed: Connection 
refused

Thanks
--
Adrian Hey




Re: Questions about sharing

2001-12-07 Thread Adrian Hey

On Friday 07 December 2001  2:27 pm, D. Tweed wrote:
> On Fri, 7 Dec 2001, Adrian Hey wrote:
> > The first is..
> > Does the compiler keep a unique copy of expressions which consist of just
> > a single zero arity constructor (eg. [],True,Nothing..) as a CAF which is
> > referenced each time the constructor appears in an expression, or does it
> > duplicate the constructor (expression) each time it's used.
> > Maybe I should define my own CAF at the top level and use it instead?
> > (or perhaps they're unboxed somehow?)
>
> Really idle curiosity... why would having a single copy of a zero arity
> constructor be more efficient than have multiple copies? Wouldn't they fit
> into a `cell' which wouldn't be larger than the one that would
> (IIRC) be used for the indirection to the CAF? (I can understand a larger
> CAF being a win, but one this small?)

Well I suppose if it's necessary to create a new indirection heap record
for each reference, then there's not really any point in having a single
copy of the value itself. But I don't see why that should be so. Even
if it is indirected for some reason it should still be possible to
share the indirection record I think.

Maybe CAF is the wrong word to use here since there's no application
involved. What I mean is.. are zero arity constructors referenced the
same way as a top level constant would be? (Not that I know how
that's done, but I presume there's only 1 copy of top level
constants in memory at any one time.)

I was thinking of data types like binary trees. If the tree was
balanced, and a new copy of a leaf (zero arity empty tree) was
constructed on the heap every time a function returned this value,
then this would double the No. of heap records associated with the
tree. This would waste memory and slow down garbage collection.

I just wanted to make sure I don't need to use any weird
programming style to ensure this doesn't happen.

Regards
-- 
Adrian Hey





Re: Questions about sharing

2001-12-07 Thread Adrian Hey

On Friday 07 December 2001  3:09 pm, Simon Marlow wrote:
> > The second is..
> > If, following pattern matching, the matched pattern appears in an
> > expression, is it shared or duplicated..
> > e.g. (a:as) -> f (a:as)
> > Maybe I should write something like this..
> >  x@(a:as) -> f x
> > (I know that sometimes the type checker won't allow you to do this)
>
> Yes, GHC will common these up (with -O).  I'm not sure I understand the
> point about the type checker not letting you do this yourself, though:
> surely x and (a:as) have the same type?

In this particular case they do, but sometimes this isn't so.
I.e. in..
 case <expression> of x@<pattern> -> <rhs>
the x has the same type as <pattern>, which isn't
necessarily the same as would be inferred using..
 x = <pattern>
(here the pattern is really an expression of course)

A long long time ago I griped about this in a thread the main
haskell.org mailing list (called "Pattern match success changes
types" IIRC).

IMHO variables bound using 'as patterns' like..
 x@<pattern> -> <rhs>
should be re-typed as if they had been written..
 <pattern> -> let x = <pattern> in <rhs>
in order to make sharing possible.

I don't think anybody agrees with me though :-)

Regards
-- 
Adrian Hey





Questions about sharing

2001-12-07 Thread Adrian Hey

Hello,

I sometimes wonder just how far I should go in obfuscating code by
manually implementing trivial optimisations that should (IMHO) be
implemented by the compiler, but may not be.

So I have a couple of specific questions..

The first is..
Does the compiler keep a unique copy of expressions which consist of just
a single zero arity constructor (eg. [],True,Nothing..) as a CAF which is
referenced each time the constructor appears in an expression, or does it
duplicate the constructor (expression) each time it's used.
Maybe I should define my own CAF at the top level and use it instead?
(or perhaps they're unboxed somehow?)

The second is..
If, following pattern matching, the matched pattern appears in an
expression, is it shared or duplicated..
e.g. (a:as) -> f (a:as)
Maybe I should write something like this..
 x@(a:as) -> f x
(I know that sometimes the type checker won't allow you to do this)
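For the second question, the as-pattern version looks like this (a toy illustration; whether GHC commons up the rebuilt cell without the as-pattern is exactly the point of the question):

```haskell
-- With the as-pattern, the right-hand side receives the original
-- list unchanged, so no new (:) cell need be built.
headAndAll :: [Int] -> (Int, [Int])
headAndAll ys = case ys of
  x@(a:_) -> (a, x)   -- x shares the matched cell
  []      -> (0, [])

main :: IO ()
main = print (headAndAll [3,1,2])  -- prints (3,[3,1,2])
```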

Thanks
-- 
Adrian Hey





Trying to build HDIRECT with ghc-5.00.2

2001-07-11 Thread Adrian Hey

*** ghc Newbie Alert **
I suppose the first question I should ask is: will the current version
(hdirect 0.17) work with ghc-5.00.2, or am I wasting my time?

I've found I have to put '-package lang' in the ghc options to get
the installation to work at all, but it stops eventually, complaining about
unrecognised flag -K2m (obsolete presumably). Is there a ghc-5.00.2
alternative? At this point I've given up because I don't really understand 
what I'm doing or what are appropriate settings for all those xyz_HC_OPTS in 
the makefiles or what else might need changing.
 
Any advice?

Thanks
--
Adrian Hey
