On 2006-01-04, Krasimir Angelov <[EMAIL PROTECTED]> wrote:
>> I also had extremely high memory usage when dealing with large result
>> sets -- somewhere on the order of 700MB; the same consumes about 12MB
>> with HDBC. My guess from looking briefly at the code is that the entire
>> result set is being read into memory up front.
>
> I can't understand this. The result set isn't read into memory unless
> you ask for it. If you are using collectRows then yes, you will end up
> with the entire result set in memory, but you can use fetch to read the
> set row by row as well. The forEachRow function is also helpful in this
> case.
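The fetch/forEachRow pattern Krasimir describes amounts to folding over rows one at a time, so only the accumulator stays live rather than the whole result set. A minimal self-contained sketch of that shape, using an IORef-backed stand-in for a database cursor (`mkCursor` and `nextRow` are illustrative names, not part of the HSQL API):

```haskell
import Data.IORef

-- Stand-in for a database cursor: yields one row per call until exhausted.
mkCursor :: [Int] -> IO (IO (Maybe Int))
mkCursor rows = do
  ref <- newIORef rows
  return $ do
    rs <- readIORef ref
    case rs of
      []     -> return Nothing
      (r:xs) -> writeIORef ref xs >> return (Just r)

-- Analogue of forEachRow: process rows one at a time, carrying only an
-- accumulator, so memory use is independent of the result-set size.
forEachRowSketch :: (a -> Int -> IO a) -> a -> IO (Maybe Int) -> IO a
forEachRowSketch step acc0 nextRow = go acc0
  where
    go acc = do
      mr <- nextRow
      case mr of
        Nothing -> return acc
        Just r  -> step acc r >>= go

main :: IO ()
main = do
  next  <- mkCursor [1..10]
  total <- forEachRowSketch (\acc r -> return (acc + r)) 0 next
  print total
```

Collecting into a list first (the collectRows style) would instead force all rows into memory before any processing happens.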
After looking at the code again, it's possible that it's because you're
never calling PQclear on the result set, so the results returned by
PostgreSQL linger in memory forever. I did have memory issues with
Sqlite3 as well, but a quick inspection isn't turning up an obvious
culprit.

I use ForeignPtrs everywhere in HDBC to try to make sure that nothing
like this happens, and also that The Right Thing happens if a database
handle gets garbage collected without being explicitly closed first.
There's a small C wrapper in each database driver to help ensure that
nothing ever gets finalized more than once.

-- John

_______________________________________________
Haskell mailing list
[email protected]
http://www.haskell.org/mailman/listinfo/haskell
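The ForeignPtr approach John describes can be sketched in miniature with the standard Foreign.ForeignPtr machinery (this uses plain malloc'd memory as a stand-in for a database handle; HDBC's actual drivers attach their own C-side finalizers and a C flag, not shown here):

```haskell
import Foreign.ForeignPtr
  (newForeignPtr, withForeignPtr, finalizeForeignPtr, finalizerFree)
import Foreign.Marshal.Alloc (malloc)
import Foreign.Storable (peek, poke)

main :: IO ()
main = do
  p  <- malloc
  poke p (42 :: Int)
  -- Attach finalizerFree: the memory is released when the ForeignPtr is
  -- garbage collected, even if it is never closed explicitly.
  fp <- newForeignPtr finalizerFree p
  v  <- withForeignPtr fp peek
  print v
  -- Explicit "close": runs the finalizer immediately. The runtime
  -- guarantees a finalizer runs at most once, so a later GC (or a second
  -- explicit finalize) will not free the memory again -- the same
  -- property the C wrapper enforces for the driver handles.
  finalizeForeignPtr fp
  finalizeForeignPtr fp
```

The point of the pattern is that resource release is tied to the lifetime of the Haskell value, with explicit close as an optimization rather than a correctness requirement.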
