Re: [Haskell] Realistic max size of GHC heap

2005-09-15 Thread Karl Grapone
On 9/15/05, Simon Marlow <[EMAIL PROTECTED]> wrote:
> On 15 September 2005 01:04, Karl Grapone wrote:
> > I'm considering using haskell for a system that could, potentially,
> > need 5GB-10GB of live data.
> > My intention is to use GHC on Opteron boxes which will give me a max
> > of 16GB-32GB of real ram.  I gather that GHC is close to being ported
> > to amd64.
> >
> > Is it a realistic goal to operate with a heap size this large in GHC?
> > The great majority of this data will be very long tenured, so I'm
> > hoping that it'll be possible to configure the GC not to need too much
> > peak memory during the collection phase.
>
> It'll be a good stress test for the GC, at least.

Ouch!  It scares me when people say that something will be a good stress
test! :)

> There are no reasons in principle why you can't have a heap this big,
> but major collections are going to take a long time.  It sounds like in
> your case most of this data is effectively static, so in fact a major
> collection will be of little use.

You're correct: the system will gradually accrue permanent data.
I foresee there being two distinct generations, a fairly constant-sized
short-lived one and a gradually increasing set of immortal allocations.
Response times will be critical, but hopefully the GC can be tweaked to
a sweet spot.

> Generational collection tries to deal with this in an adaptive way:
> long-lived data gets traversed less and less often as the program runs,
> as long as you have enough generations.  But if the programmer really
> knows that a large chunk of data is going to be live for a long time, it
> would be interesting to see whether this information could be fed back
> in a way that the GC can take advantage of it.  I'm sure there must be
> existing techniques for this sort of thing.

Well, I would naively say I only need two, maybe three, generations, as
any memory that has been around for more than a couple of hours is
definitely going to be around until system shutdown.  But I'm completely
new to haskell and I don't know whether that holds for a lazy language.
My hope was that laziness would allow for better response times, but it
certainly seems to muddy the GC waters.

I'd like to recommend haskell, but I just don't know enough to be
comfortable yet... more research methinks.

Thanks for your responses.
Karl
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Mixing monadic and non-monadic functions

2005-09-15 Thread Frederik Eaton
> I have another proposal, though. Introduce a new keyword, which I'll
> call "borrow" (the opposite of "return"), that behaves like a
> function of type (Monad m) => m a -> a inside of do statements. More
> precisely, a do expression of the form
> 
>  do { ... ; ... borrow E ... ; ... }
> 
> is transformed into
> 
>  do { ... ; x <- E ; ... x ... ; ... }
>
> where x is a fresh variable. If more than one borrow form appears in
> the same do statement, they are pulled out from left to right, which
> matches the convention already used in liftM2, ap, mapM, etc.

I think this is a good idea. I like the inline "<-", or maybe
something like "@".

I'm not sure what you intend to do about nested "do" statements,
though. If they correspond to different monads, I might want to have a
'borrow' in the inner "do" statement create a lifted expression in the
outer "do" statement. Furthermore, I might want to have a lifted
expression in the outer "do" create something which needs to be
evaluated again in the monad of the inner "do" to produce the final
value.

In any case, it would certainly be good to have better support for
lifting; and something which doesn't weaken the type system is likely
to be implemented before something that does is, so I am in favor of
investigation along the lines of your proposal.
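To make the transformation concrete, here is the pulling-out written by hand for a small program (no `borrow` keyword exists in any compiler; the commented-out line is the proposed surface syntax, and the code below is what it would desugar to):

```haskell
import Data.IORef

-- Proposed (hypothetical) surface syntax:
--   do { r <- newIORef 1; print (borrow (readIORef r) + 1) }
-- Its desugaring: the monadic subexpression is bound to a fresh
-- variable in a statement inserted just before the use site.
main :: IO ()
main = do
  r <- newIORef (1 :: Int)
  x <- readIORef r          -- x stands in for  borrow (readIORef r)
  print (x + 1)             -- prints 2
```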

Frederik

-- 
http://ofb.net/~frederik/


Re: [Haskell] Mixing monadic and non-monadic functions

2005-09-15 Thread Einar Karttunen
On 15.09 23:40, Bulat Ziganshin wrote:
> of course
> 
> class Ref c a where
>   new :: a -> IO (c a)
>   get :: c a -> IO a
>   set :: c a -> a -> IO ()

Maybe even:

class Ref m t where
  new :: a -> m (t a)
  get :: t a -> m a
  set :: t a -> a -> m ()
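As a sanity check, the two obvious instances of this class typecheck. A sketch (the class is repeated for self-containment; `MultiParamTypeClasses`/`FlexibleInstances` assumed, and note that without a functional dependency the ref type usually needs an annotation at `new`):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}
import Control.Monad.ST
import Data.IORef
import Data.STRef

class Ref m t where
  new :: a -> m (t a)
  get :: t a -> m a
  set :: t a -> a -> m ()

instance Ref IO IORef where
  new = newIORef
  get = readIORef
  set = writeIORef

instance Ref (ST s) (STRef s) where
  new = newSTRef
  get = readSTRef
  set = writeSTRef

main :: IO ()
main = do
  -- the annotation picks the IORef instance; nothing else determines t
  r <- new (41 :: Int) :: IO (IORef Int)
  set r 42
  get r >>= print    -- prints 42
```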

Or if you want to support things like FastMutInts

class Ref m t v where
  new :: v -> m t
  get :: t -> m v
  set :: t -> v -> m ()

That would even support an evil IOArray instance:

instance Ref IO (IOArray Int v, Int) v where
  new iv             = do arr <- newArray (0, 0) iv
                          return (arr, 0)
  get (arr, idx)     = readArray arr idx
  set (arr, idx) val = writeArray arr idx val

- Einar Karttunen


Re: [Haskell] Mixing monadic and non-monadic functions

2005-09-15 Thread robert dockins

I raise you:

class (Monad m) => Ref m c | c -> m where
  new      :: a -> m (c a)
  get      :: c a -> m a
  peek     :: c a -> m a
  set      :: c a -> a -> m ()
  modify   :: c a -> (a -> a) -> m a
  modify_  :: c a -> (a -> a) -> m ()
  modifyM  :: c a -> (a -> m a) -> m a
  modifyM_ :: c a -> (a -> m a) -> m ()
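A sketch of how the pure `modify` variants can be given defaults in terms of the monadic ones, plus the `IORef` instance (`MultiParamTypeClasses` and `FunctionalDependencies` assumed; `peek` defaulting to `get` is my assumption, since the post doesn't pin down the distinction):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}
import Data.IORef

class Monad m => Ref m c | c -> m where
  new      :: a -> m (c a)
  get      :: c a -> m a
  set      :: c a -> a -> m ()
  modifyM  :: c a -> (a -> m a) -> m a
  modifyM_ :: c a -> (a -> m a) -> m ()
  -- the pure variants default to the monadic ones
  modify   :: c a -> (a -> a) -> m a
  modify r f = modifyM r (return . f)
  modify_  :: c a -> (a -> a) -> m ()
  modify_ r f = modifyM_ r (return . f)
  peek     :: c a -> m a
  peek = get

instance Ref IO IORef where
  new = newIORef
  get = readIORef
  set = writeIORef
  modifyM r f = do
    x  <- readIORef r
    x' <- f x
    writeIORef r x'
    return x'                 -- return the new value
  modifyM_ r f = readIORef r >>= f >>= writeIORef r

main :: IO ()
main = do
  c <- new 10 :: IO (IORef Int)
  _ <- modify c (+ 1)
  get c >>= print    -- prints 11
```

The fundep `c -> m` lets the monad be inferred from the ref type, though `new` still needs an annotation to pick the ref type itself.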



Bulat Ziganshin wrote:


Hello Lyle,

Thursday, September 15, 2005, 10:50:30 PM, you wrote:



 z := *x + *y   -- translated to { x' <- readIORef x; y' <- readIORef y; 
writeIORef z (x'+y') }



LK> Right, I realize my suggestion is the same as Ben's.  I just prefer a 
LK> more succinct notation, like special brackets instead of a keyword.  I 
LK> like your idea about IORefs.  I think it should work as well for 
LK> STRefs... perhaps it needs to belong to a type class, in a way?


of course

class Ref c a where
  new :: a -> IO (c a)
  get :: c a -> IO a
  set :: c a -> a -> IO ()







Re[2]: [Haskell] Mixing monadic and non-monadic functions

2005-09-15 Thread Bulat Ziganshin
Hello Lyle,

Thursday, September 15, 2005, 10:50:30 PM, you wrote:

>>   z := *x + *y   -- translated to { x' <- readIORef x; y' <- readIORef y; 
>> writeIORef z (x'+y') }
>>
LK> Right, I realize my suggestion is the same as Ben's.  I just prefer a 
LK> more succinct notation, like special brackets instead of a keyword.  I 
LK> like your idea about IORefs.  I think it should work as well for 
LK> STRefs... perhaps it needs to belong to a type class, in a way?

of course

class Ref c a where
  new :: a -> IO (c a)
  get :: c a -> IO a
  set :: c a -> a -> IO ()



-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]





Re: [Haskell] Mixing monadic and non-monadic functions

2005-09-15 Thread Lyle Kopnicky

Bulat Ziganshin wrote:


Hello Ben,

Wednesday, September 14, 2005, 6:32:27 PM, you wrote:

BRG>  do { ... ; ... borrow E ... ; ... }

BRG> is transformed into

BRG>  do { ... ; x <- E ; ... x ... ; ... }

i strongly support this suggestion. actually, i suggest the same for
dealing with references (IORef/MVar/...), for example:

do x <- newIORef 0
   y <- newIORef 0
   z <- newIORef 0
   z := *x + *y   -- translated to { x' <- readIORef x; y' <- readIORef y;
                  --                 writeIORef z (x'+y') }

 

Right, I realize my suggestion is the same as Ben's.  I just prefer a 
more succinct notation, like special brackets instead of a keyword.  I 
like your idea about IORefs.  I think it should work as well for 
STRefs... perhaps it needs to belong to a type class, in a way?


- Lyle


Re[2]: [Haskell] Mixing monadic and non-monadic functions

2005-09-15 Thread Bulat Ziganshin
Hello Ben,

Wednesday, September 14, 2005, 6:32:27 PM, you wrote:

BRG>  do { ... ; ... borrow E ... ; ... }

BRG> is transformed into

BRG>  do { ... ; x <- E ; ... x ... ; ... }

i strongly support this suggestion. actually, i suggest the same for
dealing with references (IORef/MVar/...), for example:

do x <- newIORef 0
   y <- newIORef 0
   z <- newIORef 0
   z := *x + *y   -- translated to { x' <- readIORef x; y' <- readIORef y;
                  --                 writeIORef z (x'+y') }
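Without such sugar, the translation in the comment has to be spelled out by hand today; a runnable version of exactly that desugaring:

```haskell
import Data.IORef

main :: IO ()
main = do
  x <- newIORef (1 :: Int)
  y <- newIORef 2
  z <- newIORef 0
  -- hand-written form of the proposed  z := *x + *y
  x' <- readIORef x
  y' <- readIORef y
  writeIORef z (x' + y')
  readIORef z >>= print    -- prints 3
```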



-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]





Re[2]: [Haskell] Realistic max size of GHC heap

2005-09-15 Thread Bulat Ziganshin
Hello Simon,

Thursday, September 15, 2005, 2:42:44 PM, you wrote:

>> of 16GB-32GB of real ram.  I gather that GHC is close to being ported

SM> It'll be a good stress test for the GC, at least.  There are no reasons
SM> in principle why you can't have a heap this big, but major collections
SM> are going to take a long time.  It sounds like in your case most of this
SM> data is effectively static, so in fact a major collection will be of
SM> little use.  

Maybe he should use more than two generations?
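For what it's worth, the generation count is a run-time knob in GHC's RTS, so this can be tried without any code changes. A hedged sketch (the program name is hypothetical; `-G`, `-A` and `-H` are the documented RTS flags for generation count, allocation-area size and suggested heap size):

```sh
# Three generations instead of the default two, a 64 MB allocation
# area, and a suggested heap size so the heap grows eagerly up front.
./server +RTS -G3 -A64m -H6000m -RTS
```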


-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]





RE: [Haskell] Realistic max size of GHC heap

2005-09-15 Thread Simon Marlow
On 15 September 2005 14:48, S. Alexander Jacobson wrote:

> Should one interpret this as GHC now targets 64-bit systems or does
> one need to employ some sort of cleverness to use this much memory?
> (I posted this question a while ago and was told that GHC did not at
> that time support 64-bit so could not use that much memory)

GHC has supported 64-bit systems for quite some time; e.g. we have had
Alpha support for many years.  Recently we've added amd64 support, which
will be fully supported in the upcoming 6.4.1 release.  Perhaps you're
thinking of nhc98 - if I recall correctly it isn't ported to 64 bit yet.

> On a related note, does GHC now distribute IO threads over multiple
> CPUs or is it still a 1 CPU system?

GHC has distributed IO threads over multiple CPUs for quite some time
:-)  You need to use the -threaded option when compiling your program.

Cheers,
Simon


RE: [Haskell] Realistic max size of GHC heap

2005-09-15 Thread S. Alexander Jacobson
Should one interpret this as GHC now targets 64-bit systems or does 
one need to employ some sort of cleverness to use this much memory?
(I posted this question a while ago and was told that GHC did not at 
that time support 64-bit so could not use that much memory)


On a related note, does GHC now distribute IO threads over multiple 
CPUs or is it still a 1 CPU system?


-Alex-

__
S. Alexander Jacobson tel:917-770-6565 http://alexjacobson.com




On Thu, 15 Sep 2005, Simon Marlow wrote:


On 15 September 2005 01:04, Karl Grapone wrote:


I'm considering using haskell for a system that could, potentially,
need 5GB-10GB of live data.
My intention is to use GHC on Opteron boxes which will give me a max
of 16GB-32GB of real ram.  I gather that GHC is close to being ported
to amd64.

Is it a realistic goal to operate with a heap size this large in GHC?
The great majority of this data will be very long tenured, so I'm
hoping that it'll be possible to configure the GC not to need too much
peak memory during the collection phase.


It'll be a good stress test for the GC, at least.  There are no reasons
in principle why you can't have a heap this big, but major collections
are going to take a long time.  It sounds like in your case most of this
data is effectively static, so in fact a major collection will be of
little use.

Generational collection tries to deal with this in an adaptive way:
long-lived data gets traversed less and less often as the program runs,
as long as you have enough generations.  But if the programmer really
knows that a large chunk of data is going to be live for a long time, it
would be interesting to see whether this information could be fed back
in a way that the GC can take advantage of it.  I'm sure there must be
existing techniques for this sort of thing.

Cheers,
Simon





Re: [Haskell] how to cite the (revised) Haskell Report

2005-09-15 Thread Wolfgang Jeltsch
On Thursday, 15 September 2005 at 08:33, Ben Horsfall wrote:
> On 15/09/05, Wolfgang Jeltsch <[EMAIL PROTECTED]> wrote:
> > Page 0?  What is that?
>
> OK, I admit there is no page zero (although the JFP contents page
> gives pages 0--255, which is where the information in my bibtex file
> must have come from); you've forced me to look at the journal version
> again - I usually use the HTML. In place of page zero there should be
> pages i--xii, which contain the report's table of contents and
> preface. Thus pages = {i--xii,1--255}. Also, I give author = {... and
> others} rather than editor only because bibtex will not display an
> editor in an @article citation.
>
> Ben

Thank you very much.

Best wishes,
Wolfgang


RE: [Haskell] Realistic max size of GHC heap

2005-09-15 Thread Simon Marlow
On 15 September 2005 01:04, Karl Grapone wrote:

> I'm considering using haskell for a system that could, potentially,
> need 5GB-10GB of live data.
> My intention is to use GHC on Opteron boxes which will give me a max
> of 16GB-32GB of real ram.  I gather that GHC is close to being ported
> to amd64.
> 
> Is it a realistic goal to operate with a heap size this large in GHC?
> The great majority of this data will be very long tenured, so I'm
> hoping that it'll be possible to configure the GC not to need too much
> peak memory during the collection phase.

It'll be a good stress test for the GC, at least.  There are no reasons
in principle why you can't have a heap this big, but major collections
are going to take a long time.  It sounds like in your case most of this
data is effectively static, so in fact a major collection will be of
little use.  

Generational collection tries to deal with this in an adaptive way:
long-lived data gets traversed less and less often as the program runs,
as long as you have enough generations.  But if the programmer really
knows that a large chunk of data is going to be live for a long time, it
would be interesting to see whether this information could be fed back
in a way that the GC can take advantage of it.  I'm sure there must be
existing techniques for this sort of thing.

Cheers,
Simon