The two main use cases I have in mind are (1) really, really big abstract syntax trees or proof trees (a la compilers or theorem provers), and (2) random access to numerical data.
Does that help clarify what I'm asking about? In each case, is there a standard way of dealing with this? In the case of (1), the sensible approach seems to me to be some sort of zipper representation that loads the adjacent nodes to some depth around your current location in the tree; in the case of (2), it seems the sensible way would be to load subsequences of the data into memory. (Rough sketches of both ideas are included after the quoted message below.)

----- Original Message ----
From: Andrew Coppin <[EMAIL PROTECTED]>
To: haskell-cafe@haskell.org
Sent: Monday, August 13, 2007 3:55:57 PM
Subject: Re: [Haskell-cafe] out of core computing in haskell

Carter T Schonwald wrote:
> Hello Everyone,
> I'm not quite sure if I'm posing this question correctly, but what
> facilities currently exist in haskell to nicely deal with
> datastructures that won't fit within a given machine's ram?
> And if there are no such facilities, what would it take to fix that?

If you just want to process a big chunk of data from one end to the other
without loading it all into RAM at once... that's fairly easy. Haskell is a
lazy language. By playing with functions such as getContents, you can
automatically load the data as you access it. No tricky programming required.

If you're asking for something more specific -- e.g., "how do I talk to a
standard database like Oracle / MySql / et al.", there are a couple of
libraries for that. (Unfortunately, no one standard one.) See Stefan's answer.

If you'd like to be more specific about what you'd like to do...
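For case (1), here is roughly the shape I imagine such a zipper taking. This is only a minimal sketch under my own assumptions, not existing library code: Tree, Subtree, Crumb, fetchSubtree, goLeft/goRight/goUp are names I made up, and fetchSubtree is a stub where real deserialisation from disk (or a database) would go.

-- A subtree is either already in memory or still on disk, identified
-- here by a FilePath just for concreteness.
data Tree a
  = Leaf
  | Node (Subtree a) a (Subtree a)

data Subtree a
  = InMemory (Tree a)
  | OnDisk FilePath          -- load it with 'load' when we move into it

-- Stub loader: a real program would deserialise the subtree here.
fetchSubtree :: FilePath -> IO (Tree a)
fetchSubtree _path = return Leaf

-- Force a subtree into memory.
load :: Subtree a -> IO (Tree a)
load (InMemory t) = return t
load (OnDisk p)   = fetchSubtree p

-- One step of the path back to the root, keeping the sibling we did
-- not descend into.
data Crumb a
  = WentLeft  a (Subtree a)   -- label and right sibling
  | WentRight a (Subtree a)   -- label and left sibling

type Zipper a = (Tree a, [Crumb a])

-- Move down to the left child, loading it from disk if necessary.
goLeft :: Zipper a -> IO (Maybe (Zipper a))
goLeft (Leaf, _)            = return Nothing
goLeft (Node l x r, crumbs) = do
  l' <- load l
  return (Just (l', WentLeft x r : crumbs))

-- Move down to the right child, loading it from disk if necessary.
goRight :: Zipper a -> IO (Maybe (Zipper a))
goRight (Leaf, _)            = return Nothing
goRight (Node l x r, crumbs) = do
  r' <- load r
  return (Just (r', WentRight x l : crumbs))

-- Move back up, re-wrapping the subtree we came from as in-memory.
goUp :: Zipper a -> Maybe (Zipper a)
goUp (_, [])                     = Nothing
goUp (t, WentLeft  x r : crumbs) = Just (Node (InMemory t) x r, crumbs)
goUp (t, WentRight x l : crumbs) = Just (Node l x (InMemory t), crumbs)

Presumably a real version would also prefetch children to some depth k on each move, and evict far-away subtrees back to OnDisk to keep memory use bounded.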
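For case (2), a minimal sketch of the "load subsequences into memory" idea, assuming the data sits in a flat binary file of 64-bit little-endian integers; readWindow and "data.bin" are hypothetical names of mine. It uses hSeek plus Data.ByteString.Lazy.hGet so only the requested window is ever read.

import           Control.Monad        (replicateM)
import           Data.Binary.Get      (getWord64le, runGet)
import qualified Data.ByteString.Lazy as BL
import           Data.Word            (Word64)
import           System.IO

bytesPerElem :: Integer
bytesPerElem = 8

-- Read 'count' elements starting at element index 'offset',
-- without touching the rest of the file.
readWindow :: FilePath -> Integer -> Int -> IO [Word64]
readWindow path offset count =
  withBinaryFile path ReadMode $ \h -> do
    hSeek h AbsoluteSeek (offset * bytesPerElem)
    bytes <- BL.hGet h (count * fromIntegral bytesPerElem)
    return (runGet (replicateM count getWord64le) bytes)

main :: IO ()
main = do
  -- e.g. elements 1000000 .. 1000099 of a (hypothetical) "data.bin"
  window <- readWindow "data.bin" 1000000 100
  print (sum window)

For floating-point data you would decode the same 8-byte words into Doubles instead, and one could layer a small cache of recently used windows on top of this if the access pattern has any locality.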
_______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe