Matthew McCawley wrote:


Mike Matrigali wrote:
I have reported DERBY-5624 to track this issue. I think I understand the problem, but would feel much better with a reproducible test case
I could run.  Feel free to add your information to DERBY-5624.


In our case, we just have a single table with about 5 million rows of
essentially junk data. I delete some portion of the data that's older than
some margin (half of it, a single day's worth, etc.) and then run compression.
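
For what it's worth, that amounts to roughly the following JDBC sketch. The
database name, schema, table, timestamp column, and cutoff value are all
made-up placeholders; the compression step assumes the standard
SYSCS_UTIL.SYSCS_COMPRESS_TABLE system procedure.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

public class CompressRepro {
    public static void main(String[] args) throws Exception {
        // Embedded Derby connection; the database name is a placeholder.
        try (Connection conn = DriverManager.getConnection("jdbc:derby:testdb")) {

            // Delete the portion of the data older than some cutoff.
            try (PreparedStatement del = conn.prepareStatement(
                    "DELETE FROM APP.JUNK_DATA WHERE CREATED < ?")) {
                del.setTimestamp(1, Timestamp.valueOf("2012-01-01 00:00:00"));
                del.executeUpdate();
            }

            // Reclaim the freed space with the built-in compression procedure.
            try (CallableStatement cs = conn.prepareCall(
                    "CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE(?, ?, ?)")) {
                cs.setString(1, "APP");
                cs.setString(2, "JUNK_DATA");
                cs.setShort(3, (short) 1); // 1 = sequential mode, uses less temp space
                cs.execute();
            }
        }
    }
}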

I also tried another table that's about twice as big, but reproducing the
problem there required an 8 MB stack size. I've run out of heap space a few
times as well, but I'm still working on reproducing that.
At this point, could you move the discussion to the JIRA issue? (Just add comments; I don't think you have to be authorized to do so.)
https://issues.apache.org/jira/browse/DERBY-5624
