Berk, Murat wrote:
> We use 'spans' and remove them in one operation, and also do not
> commit anything until we finish a pass over all rows.
>
> But the main trick is blocked views, which use a smaller footprint on
> commits.
>
> Murat
Yeah, I am using blocked views as well, but after checking the code, I found I was committing after every delete! Ouch! I'm switching over to deleting spans, so it should work a lot better.
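To make the span trick concrete, here is a minimal sketch in plain Python, using an ordinary list as a stand-in for a view. The idea is to coalesce the row indices to delete into contiguous (start, count) spans, remove each span in a single operation (processing spans back-to-front so earlier positions stay valid), and commit only once at the end. The function names here are hypothetical; in Metakit's C++ API the per-span removal would map to something like `c4_View::RemoveAt(pos, count)` followed by one `Commit()`.

```python
def coalesce_spans(indices):
    """Group row indices into (start, count) spans of consecutive rows."""
    spans = []
    for i in sorted(indices):
        if spans and i == spans[-1][0] + spans[-1][1]:
            # Extends the previous span by one row.
            spans[-1] = (spans[-1][0], spans[-1][1] + 1)
        else:
            spans.append((i, 1))
    return spans

def delete_rows(rows, indices):
    """Delete the given row indices from `rows`, one span at a time.

    Spans are processed in reverse order so that deleting a later span
    does not shift the positions of earlier ones.  A real Metakit port
    would issue one remove-range call per span and a single commit at
    the end, instead of committing after every row.
    """
    for start, count in reversed(coalesce_spans(indices)):
        del rows[start:start + count]
    return rows
```

For example, deleting rows {1, 2, 3, 5, 9} from a 10-row view takes three span removals instead of five single-row deletes, and zero intermediate commits.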
The memory usage of individual deletes, especially across blocked views, is most probably due to MK allocating 4 Kb buffer chunks in every column in which a change is made (and sometimes much more, to hold modified copies of ranges of data). With blocked views, I suspect that memory usage could indeed rise to a multiple of the dataset size. A blocked view with, say, 5 columns and 40,000 rows could have 5 x 40 = 200 blocks, i.e. 800 Kb of sparsely filled buffers pending until flushed by a commit or rollback.
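The arithmetic above can be sketched as a two-line estimate. The 1,000-rows-per-block figure is an assumption chosen to reproduce the 5 x 40 example; the actual blocking factor depends on Metakit internals.

```python
BLOCK_BUFFER = 4 * 1024    # 4 Kb buffer chunk per modified block/column (from the post)
ROWS_PER_BLOCK = 1000      # assumed blocking factor, not a documented Metakit constant

def pending_buffer_bytes(columns, rows):
    """Rough upper bound on buffer memory held until commit/rollback."""
    blocks = columns * -(-rows // ROWS_PER_BLOCK)   # ceiling division
    return blocks * BLOCK_BUFFER

# 5 columns x 40,000 rows -> 5 x 40 = 200 blocks -> 800 Kb pending
```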
The fix for this would be to track the total set of pending buffers, and start coalescing some of the in-memory data buffers to free that memory up, well before the actual commit.
I'm surprised that memory usage stays high across commits, though, and even more so by what looks like a 32-bit sign overflow in file positions getting through undetected and messing up a datafile. The 2 Gb limit should lead to commit failures, not file damage!
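The overflow itself is easy to demonstrate. A minimal sketch, assuming file positions are held in a signed 32-bit C integer somewhere in the I/O path (the helper name below is hypothetical): any offset at or past 2 Gb wraps to a large negative value, which would turn a seek to the end of the file into a seek to garbage instead of a clean error.

```python
def to_int32(n):
    """Reinterpret an offset as a signed 32-bit value, the way code
    storing file positions in a plain C `int`/`long` would see it."""
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

# 2**31 - 1 is the largest safe offset; one byte more wraps negative:
# to_int32(2**31 - 1) ->  2147483647
# to_int32(2**31)     -> -2147483648
```

This is why a missing range check corrupts the datafile rather than failing the commit: the wrapped offset is still a "valid" seek target as far as the OS is concerned.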
-jcw
_____________________________________________
Metakit mailing list - [EMAIL PROTECTED]
http://www.equi4.com/mailman/listinfo/metakit
