My apologies. I sent this to the wrong list.
H
___
Zope-Dev maillist - Zope-Dev@zope.org
http://mail.zope.org/mailman/listinfo/zope-dev
** No cross posts or HTML encoding! **
(Related lists -
http://mail.zope.org/mailman/listinfo/zope-announce
Hi all
I run my script foo.zctl with zopectl run foo.ctl param1 param2.
This script operates on a large ZODB and catches ConflictErrors
accordingly. It iterates over a set, updates data and commits the
transaction every 100 iterations. But I've noticed two things:
1. ConflictErrors are never
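The pattern described above (do per-item work, commit every 100 iterations, cope with ConflictErrors) can be sketched roughly as below. This is a hedged illustration, not the poster's actual script: `handle`, `commit`, and `abort` are placeholders for the real per-item work, `transaction.commit()`, and `transaction.abort()`, and the `ConflictError` class here stands in for `ZODB.POSException.ConflictError`.

```python
class ConflictError(Exception):
    """Stand-in for ZODB.POSException.ConflictError."""

def process_in_batches(items, handle, commit, abort,
                       batch_size=100, max_retries=3):
    """Apply `handle` to each item, committing every `batch_size` items.

    On a conflict the doomed transaction is aborted and the whole batch
    is replayed; the error is re-raised once retries are exhausted.
    """
    items = list(items)
    for start in range(0, len(items), batch_size):
        batch = items[start:start + batch_size]
        for attempt in range(max_retries):
            try:
                for item in batch:
                    handle(item)
                commit()
                break
            except ConflictError:
                abort()  # discard uncommitted changes before replaying
                if attempt == max_retries - 1:
                    raise
```

Note that in ZODB you cannot simply re-issue commit() after a ConflictError; the transaction must be aborted and the modifications redone, so `handle` needs to be safe to replay for a batch.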
Hi Tim
I'm more involved with Plone but can provide a slightly more
digestible answer :)
Chris mentioned unit tests. You do not have to write a new one. What
is required is to look at the existing tests and identify one that is
relevant to your problematic method. This test should be
The usual Plone catalogs (portal_catalog, uid_catalog,
reference_catalog and membrane_tool) all run above 90% hit rate if the
server is up to it. portal_catalog is invalidated the most so it
fluctuates the most.
If the server is severely underpowered then catalogcache is much less
effective.
Have you measured the time needed for some standard ZCatalog queries
on a Plone site, including the communication overhead with memcached?
Generally speaking, I think the ZCatalog is fast. Queries using a
full-text index are known to be more expensive, as are queries over
large
I'd love it if this weren't a monkey patch.
So would I, but I couldn't find another way in this case.
Also, there is nothing that makes this integrate correctly with
transactions. Your cache will happily deliver never-committed data,
and it will not isolate transactions from each
In addition, you need to include a serial in your cache keys to avoid
dirty reads.
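One way to do that, sketched here under the assumption that each catalogued object exposes its ZODB serial (`_p_serial`), is to fold the serial into the memcached key, so a key computed against a stale revision can never match the current one. The function name and key layout are illustrative, not collective.catalogcache's actual scheme:

```python
import hashlib

def cache_key(oid: bytes, serial: bytes, query_repr: str) -> str:
    """Build a memcached key that changes whenever the object's serial
    (revision) changes, so entries for stale revisions simply miss."""
    raw = b":".join([oid, serial, query_repr.encode("utf-8")])
    return hashlib.sha1(raw).hexdigest()
```

Because the serial changes on every commit that touches the object, entries keyed against an older revision become unreachable and just expire out of memcached; no explicit invalidation is needed for this class of dirty read.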
The cache invalidation code actively removes items from the cache. Am
I understanding you correctly?
H
On Sat, Oct 25, 2008 at 2:57 PM, Andreas Jung [EMAIL PROTECTED] wrote:
On 25.10.2008 14:53 Uhr, Hedley Roos wrote:
I'd love it if this weren't a monkey patch.
So would I, but I couldn't find another way in this case.
Also, there is nothing that makes this integrate correctly
If you catalog an object, then search for it and then abort the
transaction, your cache will have data in it that isn't committed.
Kind of like how I came to the same conclusion in parallel to you and
stuffed up this thread :)
Additionally, when another transaction is already running in parallel, it
will see cache inserts from other transactions.
A possible solution is to keep a module level cache which can be
committed to the memcache on transaction boundaries. That way I'll
incur no performance penalty.
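A minimal sketch of that idea, assuming `memcache` is any client with `get`/`set` methods (the class name and hooks below are hypothetical, not the real collective.catalogcache code): writes are staged in a per-transaction buffer and only flushed to the shared cache on commit, while an abort just drops the buffer, so uncommitted data never reaches other clients.

```python
class TransactionBufferedCache:
    """Stage cache writes locally; push them to memcached only on commit."""

    def __init__(self, memcache):
        self._memcache = memcache  # anything with get(key) / set(key, value)
        self._buffer = {}

    def set(self, key, value):
        self._buffer[key] = value  # staged, not yet visible to others

    def get(self, key):
        # Reads see this transaction's own staged writes first.
        if key in self._buffer:
            return self._buffer[key]
        return self._memcache.get(key)

    def commit(self):
        # Transaction boundary: publish staged writes to the shared cache.
        for key, value in self._buffer.items():
            self._memcache.set(key, value)
        self._buffer.clear()

    def abort(self):
        # Uncommitted data is simply discarded.
        self._buffer.clear()
```

In a real integration the `commit`/`abort` hooks would be driven by the transaction machinery (e.g. registered at transaction boundaries) rather than called by hand.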
H
Hi all
Over the past few weeks I've been optimizing a busy Plone site, and so
collective.catalogcache was born.
It uses memcached as a distributed ZCatalog cache. It is currently in
production and seems to be holding just fine. The site went from being
unusable to serving quite a bit of data.
I'll