The usual Plone catalogs (portal_catalog, uid_catalog,
reference_catalog and membrane_tool) all run above a 90% hit rate if the
server is up to it. portal_catalog is invalidated the most, so it
fluctuates the most.
If the server is severely underpowered then catalogcache is much less
effective.
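For reference, the hit rate being discussed is simply hits / (hits + misses) over the cache. A minimal sketch of how such a counter could wrap a query cache (the `QueryCache` class and its backing dict are illustrative stand-ins, not the actual catalogcache code):

```python
class QueryCache:
    """Toy query cache that tracks its own hit rate.

    Illustrative only -- a stand-in for a memcached-backed
    catalog cache, not the catalogcache implementation itself.
    """

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def get_or_compute(self, key, compute):
        # Serve from the cache when possible, otherwise compute and store.
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        value = compute()
        self._store[key] = value
        return value

    @property
    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0


cache = QueryCache()
for _ in range(9):
    cache.get_or_compute("portal_catalog:folder-listing",
                         lambda: ["doc1", "doc2"])
print(cache.hit_rate)  # 8 hits, 1 miss -> ~0.89
```

A repeated query quickly pushes the rate toward 1.0, which is why a frequently invalidated catalog like portal_catalog shows the most fluctuation.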
On 25.10.2008 8:48 Uhr, Hedley Roos wrote:
> The usual Plone catalogs (portal_catalog, uid_catalog,
> reference_catalog and membrane_tool) all run above 90% hit rate if the
> server is up to it.
Have you measured the time needed for some standard ZCatalog queries
used with a Plone site, compared with the communication overhead with
memcached?
Generally speaking: I think the ZCatalog is in general fast. Queries
using a fulltext index are known to be more expensive, or if you have
to deal with large [...]
Hedley Roos wrote:
> As for standard queries on a Plone site the typical folder contents
> query is a good example. The query will be fast unless it sorts on
> sortable_title (a ZCTextIndex), right? Not sure right now.
sortable_title is a field index and shouldn't be slower than any other
index.
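A field index sort is cheap because the sortable value is computed once at index time; the query then only orders precomputed keys. A rough sketch of the idea (the normalization below is a simplification for illustration, not Plone's actual sortable_title indexer, which among other things also zero-pads embedded numbers):

```python
def sortable_key(title):
    # Simplified stand-in for a sortable_title-style normalization:
    # strip surrounding whitespace and lowercase.
    return title.strip().lower()

# docid -> precomputed sortable value, as a field index would store it
index = {
    1: sortable_key("Zebra"),
    2: sortable_key("  apple"),
    3: sortable_key("Mango"),
}

def sorted_docids(result_docids):
    # Sorting only consults the precomputed keys -- no per-query text
    # processing, which is why a field index sort stays cheap.
    return sorted(result_docids, key=index.__getitem__)

print(sorted_docids([1, 2, 3]))  # [2, 3, 1]
```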
Hi,
On Fri, 2008-10-24 at 15:41 +0200, Hedley Roos wrote:
> The product is a monkey patch to Catalog.py. I'd love some feedback and
> suggestions.
I'd love if this wouldn't be a monkey patch.
Also, there is nothing that makes this integrate correctly with
transactions. Your cache will happily deliver never-committed data and
also it will not isolate transactions from each other.
> I'd love if this wouldn't be a monkey patch.
So would I, but I couldn't find another way in this case.
> Also, there is nothing that makes this integrate correctly with
> transactions. Your cache will happily deliver never-committed data and
> also it will not isolate transactions from each other.
> In addition, you need to include a serial in your cache keys to avoid
> dirty reads.
The cache invalidation code actively removes items from the cache. Am
I understanding you correctly?
H
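The serial-in-the-key suggestion can be illustrated as follows: fold a counter that changes on every catalog write into the cache key, so entries written before the change can never be read again, whether or not anything actively invalidates them. All names here are illustrative, not taken from catalogcache:

```python
class SerialKeyedCache:
    """Toy model of serial-stamped cache keys.

    The serial bumps on every catalog mutation, so keys built before
    the mutation simply miss on lookup -- stale entries become
    unreachable instead of needing explicit invalidation.
    """

    def __init__(self):
        self._store = {}   # stand-in for memcached
        self.serial = 0

    def key_for(self, query):
        return "%s:%d" % (query, self.serial)

    def get(self, query):
        return self._store.get(self.key_for(query))

    def set(self, query, result):
        self._store[self.key_for(query)] = result

    def catalog_changed(self):
        self.serial += 1


cache = SerialKeyedCache()
cache.set("Type=Document", ["doc1"])
assert cache.get("Type=Document") == ["doc1"]
cache.catalog_changed()                      # e.g. an object recataloged
assert cache.get("Type=Document") is None    # old entry is unreachable
```

The trade-off is that unreachable entries linger in memcached until evicted by its LRU, which is usually acceptable for a cache.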
On Sat, Oct 25, 2008 at 2:57 PM, Andreas Jung [EMAIL PROTECTED] wrote:
> If you catalog an object, then search for it and then abort the
> transaction, your cache will have data in it that isn't committed.
Kind of like how I came to the same conclusion in parallel to you and
stuffed up this thread :)
> Additionally when another transaction is already running in parallel, it
> will see cache inserts from other transactions.
A possible solution is to keep a module level cache which can be
committed to the memcache on transaction boundaries. That way I'll
incur no performance penalty.
H
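The transaction-boundary idea can be sketched with a per-transaction buffer that is flushed to the shared store on commit and dropped on abort. In Zope this would typically hook in via a transaction data manager; the sketch below just models the two paths directly, and all names are illustrative:

```python
shared_store = {}   # stand-in for memcached, visible to all clients

class TransactionBuffer:
    """Toy per-transaction write buffer.

    Writes stay private to the transaction until commit, so other
    transactions never see uncommitted cache entries, and an abort
    simply discards them.
    """

    def __init__(self):
        self._pending = {}

    def set(self, key, value):
        self._pending[key] = value

    def get(self, key):
        # Read our own pending writes first, then the shared store.
        if key in self._pending:
            return self._pending[key]
        return shared_store.get(key)

    def commit(self):
        # Transaction boundary: publish buffered writes in one go.
        shared_store.update(self._pending)
        self._pending.clear()

    def abort(self):
        # Aborted writes never reach the shared store.
        self._pending.clear()


txn = TransactionBuffer()
txn.set("query:folder", ["doc1"])
assert shared_store.get("query:folder") is None   # invisible before commit
txn.abort()
assert txn.get("query:folder") is None            # aborted write is gone

txn.set("query:folder", ["doc1"])
txn.commit()
assert shared_store["query:folder"] == ["doc1"]   # visible after commit
```

This addresses both objections in the thread: aborted transactions leave nothing behind, and parallel transactions cannot observe each other's uncommitted inserts.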
On Sat, 2008-10-25 at 14:55 +0200, Hedley Roos wrote:
> > In addition, you need to include a serial in your cache keys to avoid
> > dirty reads.
> The cache invalidation code actively removes items from the cache. Am
> I understanding you correctly?
I wasn't even talking about invalidation as your [...]