Hi All,

This is with whatever ZODB ships with Zope 2.8.5...

I have a Stepper (zopectl run on steroids) job that deals with lots of big objects.

After processing each one, Stepper does a transaction.get().commit(). I thought this was enough to keep the object cache at a sane size, but the job kept bombing out with MemoryErrors, and sure enough it was using 2 or 3 gigs of memory when that happened.
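
To give a feel for the shape of the job, it's roughly the following (the container and process() are made-up names for illustration; the real objects come via Stepper):

import transaction

for obj in folder.objectValues():   # ~60,000 large persistent objects
    process(obj)                    # read/modify the persistent object
    transaction.get().commit()      # commit after each one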

I fiddled about with the gc module and found that, sure enough, objects were being kept in memory. At a guess, I inserted something close to the following:

obj._p_jar.db().cacheMinimize()

...after every 5,000 objects processed (there are 60,000 objects in total).
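
In context, the loop now looks roughly like this (same hypothetical names as above; the cacheMinimize() call is exactly the one I added):

import transaction

for i, obj in enumerate(folder.objectValues()):
    process(obj)
    transaction.get().commit()
    if (i + 1) % 5000 == 0:
        # flush this connection's database object cache
        obj._p_jar.db().cacheMinimize()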

Lo and behold, memory usage became sane.

Why is this step necessary? I thought transaction.get().commit() every so often was enough to sort out the cache...

cheers,

Chris

--
Simplistix - Content Management, Zope & Python Consulting
           - http://www.simplistix.co.uk